This document provides an overview of a MongoDB workshop. The agenda covers an overview of databases, what MongoDB is, MongoDB commands, sharding and replication in MongoDB, and a demo. The workshop is hosted by Vivian at ThoughtWorks for the NYC Open Data meetup and is presented by Kannan Sankaran and Roman Kubiak.
[Meetup] A successful migration from Elasticsearch to ClickHouse, by Vianney FOUCAULT
Paris ClickHouse meetup 2019: how Contentsquare successfully migrated to ClickHouse.
Discover the subtleties of a migration to ClickHouse: what to check beforehand, then how to operate ClickHouse in production.
Benchmarking is hard. Benchmarking databases, harder. Benchmarking databases that follow different approaches (relational vs document) is even harder.
But the market demands these kinds of benchmarks. Despite the different data models that MongoDB and PostgreSQL expose, many organizations face the challenge of picking either technology. And performance is arguably the main deciding factor.
Join this talk to discover the numbers! After $30K spent on public cloud and months of testing, there are many different scenarios to analyze. Benchmarks were performed in three distinct categories: OLTP, OLAP, and a comparison of MongoDB 4.0 transaction performance with PostgreSQL's.
What would be faster, MongoDB or PostgreSQL?
An Introduction to Confluent Cloud: Apache Kafka as a Service, by Confluent
Business breakout during Confluent’s streaming event in Munich, presented by Hans Jespersen, VP WW Systems Engineering at Confluent. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
Getting Started with Confluent Schema Registry, by Confluent
Getting started with Confluent Schema Registry, Patrick Druley, Senior Solutions Engineer, Confluent
Meetup link: https://www.meetup.com/Cleveland-Kafka/events/272787313/
Debezium is a Kafka Connect plugin that performs Change Data Capture from your database into Kafka. This talk demonstrates how this can be leveraged to move your data from one database platform such as MySQL to PostgreSQL. A working example is available on GitHub (github.com/gh-mlfowler/debezium-demo).
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
-Topics, partitions and segments
-The commit log and streams
-Brokers and broker replication
-Producer basics
-Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
In recent years Zalando has adopted a decentralized setup for applications and databases. This has impacted our database engineers by transferring responsibility to small teams, each of which manages its own infrastructure. Decentralization is great for team autonomy, but can present challenges in terms of how to easily manage lots of PostgreSQL clusters. That’s why our team created Spilo: an open source HA-cluster (highly available PostgreSQL cluster). This talk will show how Spilo simplifies Postgres cluster management while preserving team autonomy. By building upon streaming replication, Spilo provides a set of clusters that require no human interaction for many administration tasks and most failure scenarios; takes care of managing the number of servers (adding and removing them); and creates backups. It implements our own version of Patroni (https://github.com/zalando/patroni): a process, derived from Compose’s Governor, that governs the Postgres cluster (promoting and demoting) and updates information in etcd (the distributed consensus key/value store created by CoreOS). I’ll explore the architecture of Patroni implemented with Spilo; a live demo will show some failovers as they occur. Finally, I’ll show how Spilo combines Patroni with cloud infrastructure architecture components (for example, AWS), adding autoscaling to run a HA-cluster and allowing AWS power users to create a new HA-cluster with very little effort. Spilo relies upon STUPS, Zalando’s open-source platform as a service (PaaS) for enabling multiple, autonomous teams to use AWS while remaining audit-compliant. By using Spilo and STUPS together, our engineers can create a new HA-cluster with just a few commands. After attending this talk the audience will understand how they can also use Spilo, Patroni and STUPS to manage their Postgres clusters more efficiently while working autonomously.
Run Apache Spark on Kubernetes at Large Scale: Challenges and Solutions, by Anya Bida
Speaker: Bo Yang
Summary: More and more people are running Apache Spark on Kubernetes due to the popularity of Kubernetes. There are many challenges, since Spark was not originally designed for Kubernetes: easily submitting and managing applications, accessing the Spark UI, allocating resource queues based on CPU/memory, etc. This talk presents how to address these challenges and provide Spark as a service at large scale.
Apache Gobblin: Bridging Batch and Streaming Data Integration. Big Data Meetu..., by Shirshanka Das
This talk describes the motivations behind Apache Gobblin (incubating), its architecture, the latest innovations in supporting both batch and streaming data pipelines, and the future roadmap.
Reactive Microservices with Spring 5: WebFlux, by Trayan Iliev
On November 27 Trayan Iliev from IPT presented “Reactive microservices with Spring 5: WebFlux” @Dev.bg in Betahaus Sofia. IPT – Intellectual Products & Technologies has been organizing Java & JavaScript trainings since 2003.
Spring 5 introduces a new model for end-to-end functional and reactive web service programming with Spring WebFlux, Spring Data & Spring Boot. The main topics include:
– Introduction to reactive programming, Reactive Streams specification, and project Reactor (as WebFlux infrastructure)
– REST services with WebFlux: a comparison between annotation-based and functional reactive programming approaches for building them.
– Router, handler and filter functions
– Using reactive repositories and reactive database access with Spring Data. Building end-to-end non-blocking reactive web services using Netty-based web runtime
– Reactive WebClients and integration testing. Reactive WebSocket support
– Real-time event streaming to WebClients using JSON streams, and to JS clients using SSE.
PostgreSQL Replication High Availability Methods, by Mydbops
These slides illustrate the need for replication in PostgreSQL: why you need a replication DB topology, terminology, replication nodes, and more.
Next Generation Scheduling for YARN and K8s: For Hybrid Cloud/On-prem Environ..., by DataWorks Summit
The scheduler of a container orchestration system, such as YARN or K8s, is a critical component that users rely on to plan resources and manage applications.
Assessing where we are today: YARN effectively has two powerful schedulers (the Fair and Capacity schedulers), both of which serve many strong use cases in the big data ecosystem. YARN can scale up to 50k nodes per cluster, schedule 20k containers per second, and is extremely efficient at managing batch workloads.
The K8s default scheduler is an industry-proven solution for efficiently managing long-running services. More and more big data apps are moving to K8s and the cloud, but many features, such as hierarchical queues for better multi-tenancy, fair resource sharing, and preemption, are either missing or not yet mature enough to support big data apps running on K8s.
At this point, no existing solution addresses the need for a unified resource scheduling experience across platforms. That makes it extremely difficult to manage workloads running in different environments, from on-premise to cloud.
Hence, evolving a common scheduler that draws on the legacy capabilities of YARN and K8s and improves on cloud use cases will focus on use cases like:
Better bin-packing scheduling (and gang scheduling)
Autoscale up and shrink policy management
Effectively run batch workloads and services with clear SLA’s
In summary, we are improving the core scheduling capabilities to manage both K8s and YARN clusters in a cloud-aware way as a separate initiative, and the above-mentioned cases will be the core focus of this initiative. More details of our work will be presented in this talk.
Log System As Backbone – How We Built the World’s Most Advanced Vector Databa..., by StreamNative
Milvus is an open-source vector database that leverages a novel data fabric to build and manage vector similarity search applications. As the world's most popular vector database, it has already been adopted in production by thousands of companies around the world, including Lucidworks, Shutterstock, and Cloudinary. With the launch of Milvus 2.0, the community aims to introduce a cloud-native, highly scalable and extendable vector similarity solution, and the key design concept is log as data.
Milvus relies on Pulsar as its log pub/sub system. Pulsar helps Milvus reduce system complexity by loosely decoupling each microservice, and it makes the system stateless by disaggregating log storage from computation, which also makes the system more extendable. We will introduce the overall design, the implementation details of Milvus, and its roadmap in this talk.
Takeaways:
1) Get a general idea about what is a vector database and its real-world use cases.
2) Understand the major design principles of Milvus 2.0.
3) Learn how to build a complex system with the help of a modern log system like Pulsar.
Common issues with Apache Kafka® Producer, by Confluent
Badai Aqrandista, Confluent, Senior Technical Support Engineer
This session is about a common issue with the Kafka producer: producer batch expiry. We will discuss the Kafka producer internals, the common causes of batch expiry, such as a slow network or small batches, and how to overcome them. We will also share some examples along the way!
https://www.meetup.com/apache-kafka-sydney/events/279651982/
Slide deck presented at http://devternity.com/ on MongoDB internals. We review the usage patterns of MongoDB, the different storage engines and persistence models, as well as the definition of documents and general data structures.
Benefits of Stream Processing and Apache Kafka Use Cases, by Confluent
Watch this talk here: https://www.confluent.io/online-talks/benefits-of-stream-processing-and-apache-kafka-use-cases-on-demand
This talk explains how companies are using event-driven architecture to transform their business and how Apache Kafka serves as the foundation for streaming data applications.
Learn how major players in the market are using Kafka in a wide range of use cases such as microservices, IoT and edge computing, core banking and fraud detection, cyber data collection and dissemination, ESB replacement, data pipelining, ecommerce, mainframe offloading and more.
Also discussed in this talk are the differences between Apache Kafka and Confluent Platform.
This session is part 1 of 4 in our Fundamentals for Apache Kafka series.
Jane Uyvova
Senior Solutions Architect, MongoDB
March 21, 2017
MongoDB Evenings San Francisco
Learn how easy it is to set up, operate, and scale your MongoDB deployments in the cloud with MongoDB Atlas.
MongoDB Aggregations, Indexing, and Profiling, by Manish Kapoor
This deck covers the following operations in MongoDB: aggregation through the aggregation pipeline, map-reduce operations, indexes, and profiling of slow queries.
Tanel Poder Oracle Scripts and Tools (2010), by Tanel Poder
Tanel Poder's Oracle Performance and Troubleshooting Scripts & Tools presentation, initially presented at the Hotsos Symposium Training Day in 2010.
Born of the need to create a perfectly dynamic system able to withstand the most creative of sales pitches thrown at it, this talk is about what led me onto the path of Mongo, and then using it to create almost anything, from hundreds of Facebook applications to a social media sentiment ranking system used by some of the biggest companies in the world.
http://www.meetup.com/Meteor-Singapore/events/221025182/
IBM Digital Experience Theme Customization, by Van Staub, MBA
This presentation is from IBM's New Way to Learn 2016 partner enablement. The topic is an introduction to theme customization in WebSphere Portal aka IBM Digital Experience.
Alexey Zinoviev presented this paper at the Joker'15 conference: http://jokerconf.com/talks/zynovyev/
This paper covers the following topics: Java, Morphia, Hibernate OGM, JPA, Spring Data, Kundera, NoSQL, Mongo, Jongo.
MongoDB scalability and high availability with replica sets, by Vivek Parihar
One of the most awaited features in MongoDB 1.6 is replica sets: MongoDB's replication solution, providing automatic failover and recovery.
MongoDB High Availability with Replica Sets
This talk will cover:
• What is a replica set?
• The replication process
• Advantages of replica sets vs. master/slave replication
• How to set up a replica set in production (demo)
This video is a tutorial for setting up a MongoDB replica set in a production environment. It uses three instances that already have MongoDB installed and running. The tutorial covers the following steps (a minimal shell sketch follows the list):
1. Set up each instance of the replica set
2. Modify mongodb.conf to include the replica set information
3. Configure the servers to include in the replica set
4. Cross-check that when the primary is killed, a secondary becomes primary
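As a rough illustration of those steps, here is a minimal mongo shell sketch; the set name rs0 and the hostnames mongo1–mongo3 are illustrative assumptions, not details from the tutorial:
// In mongodb.conf on each of the three instances (2.x-era config syntax):
//   replSet = rs0
// Then, from the mongo shell on any one instance, initiate the set:
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
})
rs.status()   // expect one PRIMARY and two SECONDARY members
// For step 4: stop the primary and re-run rs.status() on a surviving node;
// an election should have promoted one of the secondaries to primary.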
Understand what NoSQL is and what it is not: why you would want to use NoSQL within your project, and which NoSQL database you would utilize. Explore the relationships between NoSQL and RDBMS. Understand how to select between an RDBMS (MySQL, PostgreSQL), a document database (MongoDB), a key-value store, a graph database, and columnar databases, or combinations of the above.
Agenda:
MongoDB Overview/History
Workshop
1. How to perform operations on MongoDB – Workshop
2. Using MongoDB in your Java application
Advanced usage of MongoDB
1. Performance measurement comparison – real-life use cases
2. Cluster setup
3. Cons of MongoDB vs. other document-oriented DBs
4. Map-reduce/aggregation overview
Workshop prerequisites
1. All participants must bring their laptops.
2. https://github.com/geek007/mongdb-examples
3. Software prerequisites
a. Java version 1.6+
b. Your favorite IDE; preferred: http://www.jetbrains.com/idea/download/
c. MongoDB server version 2.6.3 (http://www.mongodb.org/downloads – 64-bit version)
d. Participants can install a MongoDB client – http://robomongo.org/
About the speaker:
Akbar Gadhiya works at Ishi Systems as a Programmer Analyst. Previously he worked at PMC, Baroda and HCL Technologies.
MongoDB .local London 2019: Managing Diverse User Needs with MongoDB and SQL, by MongoDB
Data administrators face the challenge of integrating disparate data technologies into a cohesive and performant data platform. This is especially true when using diverse query languages and protocols. This session will focus on how to integrate SQL-aware applications into a MongoDB data platform.
MongoDB.local DC 2018: Tutorial - Data Analytics with MongoDB, by MongoDB
Data analytics can offer insights into your business and help take it to the next level. In this talk you'll learn about MongoDB tools for building visualizations, dashboards and interacting with your data. We'll start with exploratory data analysis using MongoDB Compass. Then, in a matter of minutes, we'll take you from 0 to 1 - connecting to your Atlas cluster via BI Connector and running analytical queries against it in Microsoft Excel. We'll also showcase the new MongoDB Charts product and you'll see how quick, easy and intuitive analytics can be on the MongoDB platform without flattening the data or spending time and effort on complicated and fragile ETL.
One of the challenges that comes with moving to MongoDB is figuring out how to best model your data. While most developers have internalized the rules of thumb for designing schemas for RDBMSs, these rules don't always apply to MongoDB. The simple fact that documents can represent rich, schema-free data structures means that we have a lot of viable alternatives to the standard, normalized, relational model. Not only that, MongoDB has several unique features, such as atomic updates and indexed array keys, that greatly influence the kinds of schemas that make sense. Understandably, this begets good questions: Are foreign keys permissible, or is it better to represent one-to-many relations within a single document? Are join tables necessary, or is there another technique for building out many-to-many relationships? What level of denormalization is appropriate? How do my data modeling decisions affect the efficiency of updates and queries? In this session, we'll answer these questions and more, provide a number of data modeling rules of thumb, and discuss the tradeoffs of various data modeling strategies.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka, by Guido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, microservices, and stream processing. Today's enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in an RDBMS of a legacy application. It's important to cache this data in the stream processing solution, so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up to date if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, i.e. when an anomaly is detected inside the stream processing solution which should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action. But this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queueing (a message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS), and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka, by Guido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, microservices, and stream processing. Data sources flowing into Kafka are often native data streams, such as social media streams, telemetry data, financial transactions, and many others. But these data streams only contain part of the information. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes are available in Kafka as soon as possible, in near real time? This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate, and bridging Kafka with Oracle Advanced Queuing (AQ).
Jumpstart: BI Connector & Tableau
Speaker: Ronan Bohan, Solutions Architect, MongoDB
Speaker: Viady Krishnan
Level: 100 (Beginner)
Track: Jumpstart
Get started with the BI Connector and Tableau in this introductory session. We will give you insight into how you can view your MongoDB data in traditional BI tools and an overview of connecting Tableau with MongoDB. After attending this session, students should be able to connect their analytics tool of choice to a MongoDB data store using the BI Connector, secure their client connection, and know how to enable authentication. Audience members should be familiar with analytics tools like Tableau for business analytics, and know how to set up and run analytics in a BI tool. This session will use Tableau as an example.
This is a Jumpstart session, held before the keynotes, designed to give you an overview of MongoDB basics so you can dive into more advanced technical sessions later in the day.
What You Will Learn:
- How to connect your analytics tool of choice to a MongoDB data store using the BI connector.
- How to view MongoDB data in Tableau or another BI tool.
- How to secure your client connection to MongoDB.
An Introduction to MongoDB
MongoDB is a rapidly growing NoSQL database solution used by many major companies and favored by startups for its simple design and setup. Its greatest advantage is the ability to use the unstructured data type, BSON, which allows developers to avoid writing, maintaining, and upgrading schemas. Unlike some SQL options, it has scaling and redundancy built in and allows for quick ad-hoc querying of any data. This talk will go over the basics of data insertion/retrieval, compare querying between SQL and MongoDB, work through some advanced queries like map/reduce and aggregation, and look at the more sophisticated features from a 10,000 ft view.
Bio: Matt Warren is a graduate of Northeastern University and has been working in Systems Administration and Engineering for 8 years. His career started at Smarter Travel Media and most recently he is working for Boston-based ground travel startup, Wanderu. He specializes in scaling solutions to handle the traffic loads of high performance, e-commerce websites. In September of 2015, he became organizer of the Boston MongoDB Meetup.
Event: http://w3.abcd.harvard.edu/ai1ec_event/intro-to-mongodb/
Webinar: General Technical Overview of MongoDB for Dev Teams, by MongoDB
In this talk we will focus on several of the reasons why developers have come to love the richness, flexibility, and ease of use that MongoDB provides. First we will give a brief introduction of MongoDB, comparing and contrasting it to the traditional relational database. Next, we’ll give an overview of the APIs and tools that are part of the MongoDB ecosystem. Then we’ll look at how MongoDB CRUD (Create, Read, Update, Delete) operations work, and also explore query, update, and projection operators. Finally, we will discuss MongoDB indexes and look at some examples of how indexes are used.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset, by MongoDB
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We ended with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor..., by Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FME, by Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024, by Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Generative AI Deep Dive: Advancing from Proof of Concept to Production, by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Removing Uninteresting Bytes in Software Fuzzing, by Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
UiPath Test Automation using UiPath Test Suite series, part 5, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
SAP Sapphire 2024 - ASUG301: Building better apps with SAP Fiori, by Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Securing your Kubernetes cluster: a step-by-step guide to success! By KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
The Art of the Pitch: WordPress Relationships and Sales, by Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Communications Mining Series - Zero to Hero - Session 1, by DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
2. MONGODB WORKSHOP
{
  meetup: "NYC Open Data",
  presenters: ["Kannan Sankaran", "Roman Kubiak"],
  host: "Vivian is awesome, THANK YOU",
  location: "ThoughtWorks is awesome, THANK YOU",
  audience: "You guys are awesome, THANK YOU"
}
3. OUR TOPICS
OVERVIEW OF DATABASES
WHAT IS MONGODB?
MONGODB, NOSQL, AND RELATIONAL DATABASES
A PEEK AT MONGODB COMMANDS
SHARDING AND REPLICATION IN MONGODB
FUTURE OF MONGODB AND US
DEMO
WORKSHOP
9. DATABASES AND THEIR GROWTH
1970s: RELATIONAL DATABASES (RDBMS) CREATED
1980s: RDBMS CONTINUE TO BE POPULAR; CLIENT/SERVER MODEL; STRUCTURED QUERY LANGUAGE (SQL) CREATED
1990s: INTERNET ARRIVES
2000s: INTERNET GROWS; NoSQL DATABASES EMERGE
2007: MONGODB CREATED
17. A DOCUMENT IS LIKE A ROW…
{
  _id: ObjectID("12AB34CD56EF"),
  name: "Ed Brown",
  orderDate: "2-1-2014"
}
18. …BUT IT IS MORE FLEXIBLE
{
  _id: ObjectID("12AB34CD56EF"),
  name: "Ed Brown",
  orderDate: "2-1-2014",
  payments: {
    car: "100.50",
    hotel: "200"
  }
}
THAT LOOKS LIKE A DOCUMENT WITHIN ANOTHER DOCUMENT!
{
  _id: ObjectID("12AB34CD56EF"),
  name: "Ed Brown",
  orderDate: "2-1-2014",
  payments: {
    car: "100.50",
    hotel: "200"
  },
  tags: ["shirt", "tie"]
}
WHAT IS THIS? MULTIPLE VALUES WITHIN A COLUMN?
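Both shapes can coexist in one collection. A minimal mongo shell sketch, assuming an illustrative orders collection:
db.orders.insert({
  name: "Ed Brown",
  orderDate: "2-1-2014",
  payments: { car: "100.50", hotel: "200" }
})
db.orders.insert({
  name: "Ed Brown",
  orderDate: "2-1-2014",
  payments: { car: "100.50", hotel: "200" },
  tags: ["shirt", "tie"]          // extra array field; no schema change needed
})
db.orders.find({ tags: "tie" })   // array values can be queried directly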
19. HOW LARGE CAN THIS DOCUMENT BE?
{
  _id: ObjectID("12AB34CD56EF"),
  name: "Ed Brown",
  orderDate: "2-1-2014",
  payments: {
    car: "100.50",
    hotel: "200"
  },
  …
}
UP TO 16 MB
LEO TOLSTOY'S 1,225-PAGE BOOK WAR AND PEACE CAN FIT IN 1 DOCUMENT, AS IT IS ONLY AROUND 3 MB.
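To see how close a document is to that cap, the legacy mongo shell offers a BSON-size helper; the books collection and query below are illustrative assumptions:
var doc = db.books.findOne({ title: "War and Peace" })
Object.bsonsize(doc)   // size in bytes; the hard limit is 16 MB (16,777,216 bytes)
// Anything larger than 16 MB belongs in GridFS instead.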
27. MONGODB FEATURES
EASY TO LEARN
DYNAMIC QUERY LANGUAGE
- SEARCH BY FIELDS, REGULAR EXPRESSIONS
- USER-DEFINED JAVASCRIPT FUNCTIONS
- AGGREGATION, INCLUDING MAP/REDUCE
INDEXING – SINGLE, COMPOUND, GEOSPATIAL
REPLICATION
LOAD BALANCING USING SHARDING
GRIDFS TO STORE FILES
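A quick sketch of the index types named above, in the 2.6-era shell syntax used elsewhere in this deck (ensureIndex was later renamed createIndex); the field names are illustrative:
db.ParksNYC.ensureIndex({ Name: 1 })                  // single-field index
db.ParksNYC.ensureIndex({ Name: 1, Courts: -1 })      // compound index
db.ParksNYC.ensureIndex({ location: "2dsphere" })     // geospatial index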
30. DATABASE MANAGEMENT SYSTEMS
1970s: BERKELEY INGRES; ORACLE
1980s: MOST SYSTEMS USE SOME FLAVOR OF SQL; INFORMIX; DB2; SYBASE
1990s: SQL SERVER; MS ACCESS; POSTGRESQL; MYSQL
2000s: NETEZZA; GREENPLUM; VERTICA; MARIADB
2007: MONGODB
33. IN THE LATE 90s/EARLY 2000s…
DOT COM BUBBLE
DOT COM BUST
WEB SERVICES
SOCIAL NETWORKS
GOOGLE, AMAZON
COMPUTER OWNERS/USERS
WEBSITE DATA COLLECTION
DATABASE SIZES
35. SCALE UP vs. SCALE OUT
SCALE UP:
A BIGGER MACHINE
MORE DISK SPACE
MORE RAM
MORE PROCESSORS
MORE EXPENSIVE
SINGLE POINT OF FAILURE
HARDWARE HAS LIMITS!
SCALE OUT:
SMALLER MACHINES
LESS DISK SPACE
LESS RAM
FEWER PROCESSORS
LESS EXPENSIVE
NO SINGLE POINT OF FAILURE
HIGHER RELIABILITY DESPITE FAILURE OF INDIVIDUAL MACHINES
39. A JOIN QUERY IN MYSQL (wp_posts, wp_comments)
SELECT p.post_author,
       p.post_date,
       c.comment_author,
       c.comment_date
FROM wp_posts AS p
INNER JOIN wp_comments AS c
  ON p.ID = c.comment_post_ID
WHERE p.ID = 1;
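For contrast, the same read in MongoDB typically needs no join, because comments can be embedded in the post document. A minimal sketch, assuming a posts collection whose documents carry an embedded comments array:
db.posts.findOne(
  { _id: 1 },
  {
    post_author: 1,
    post_date: 1,
    "comments.comment_author": 1,   // fields of the embedded array
    "comments.comment_date": 1
  }
)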
60. WHEN NOT TO USE MONGODB
IF TRANSACTIONS ARE A MUST
IF JOINS ARE ABSOLUTELY NECESSARY
SOFTWARE PRODUCTS LIKE WORDPRESS THAT ALREADY HAVE TONS OF SUPPORT FOR RELATIONAL DATABASES
61. FOR MONGODB vs MYSQL ARGUMENTS, WATCH…
Source: http://www.youtube.com/watch?v=b2F-DItXtZs
64. MONGODB FEATURES
EASY TO LEARN
DYNAMIC QUERY LANGUAGE
- SEARCH BY FIELDS, REGULAR EXPRESSIONS
- USER-DEFINED JAVASCRIPT FUNCTIONS
- AGGREGATION, INCLUDING MAP/REDUCE
INDEXING – SINGLE, COMPOUND, GEOSPATIAL
REPLICATION
LOAD BALANCING USING SHARDING
GRIDFS TO STORE FILES
77. READ SPECIFIC FIELDS IN DOCUMENT
SQL
SELECT id, Name FROM ParksNYC
MONGODB
db.ParksNYC.find(
  { },
  { _id: 1, Name: 1 }
)
78. READ DOCUMENTS WITH RANGE CRITERIA
SQL
SELECT id, Name FROM ParksNYC
WHERE Courts > 5
AND Courts <= 8
MONGODB
db.ParksNYC.find(
  { Courts: { $gt: 5, $lte: 8 } },
  { _id: 1, Name: 1 }
)
79. READ DOCUMENTS THAT START WITH A LETTER (REGULAR EXPRESSION)
SQL
SELECT id, Name FROM ParksNYC
WHERE Name LIKE 'F%'
MONGODB
db.ParksNYC.find(
  { Name: /^F/ },
  { _id: 1, Name: 1 }
)
80. UPDATE FIELD IN DOCUMENT
SQL
UPDATE ParksNYC
SET VisitDate = '1/1/2014'
MONGODB
db.ParksNYC.update(
  { },
  { $set: { VisitDate: "1/1/2014" } },
  { multi: true }
)
82. GROUP BY AND SUM
SQL
SELECT COUNT(Name) AS Parks_Number,
       SUM(Courts) AS Courts_Number
FROM ParksNYC
GROUP BY Accessible
MONGODB
db.ParksNYC.aggregate(
  { $group: {
      _id: "$Accessible",
      Parks_Number: { $sum: 1 },
      Courts_Number: { $sum: "$Courts" }
    }
  }
)
88. SHARDING STEPS
1. ENABLE SHARDING ON THE DATABASE.
2. PICK A SHARD KEY FROM THE COLLECTION. MAKE SURE THE KEY IS
   - INDEXED
   - SUFFICIENTLY UNIQUE, SO IT WILL HAVE A VARIETY OF UNIQUE VALUES.
3. SIT BACK AND RELAX. MONGODB WILL AUTOMATICALLY DO THE SHARDING.
A mongo shell sketch of these steps follows.
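A minimal sketch of the three steps, assuming the commands run against a mongos and using illustrative names (database parksdb, collection ParksNYC, shard key parkId):
sh.enableSharding("parksdb")                            // step 1: enable sharding on the database
db.ParksNYC.ensureIndex({ parkId: 1 })                  // step 2: index the shard key...
sh.shardCollection("parksdb.ParksNYC", { parkId: 1 })   //         ...and pick it
// Step 3: the balancer splits the collection into chunks and migrates
// them between shards automatically.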
91. BREAKING THE RANGE INTO CHUNKS
[Diagram: a CLIENT sends requests to a MONGOS router, which routes each shard-key value (examples on the slide: Abba1234, Abba1235, CarlW, CarlZ, FrankT, FrankY, JackA, JackB, LambV, LambW, RobF, RobG, TimA, TimB) to one of three MONGOD shards (SHARD0000, SHARD0001, SHARD0002). Together the chunks on the shards cover the entire key range from $minKey to $maxKey.]
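To inspect how the key range was actually split, the shell's sharding status helper lists every chunk and its owning shard; the ranges in the comments are illustrative:
sh.status()
// prints, per chunk, something like:
//   { parkId: { $minKey: 1 } } -->> { parkId: "CarlZ" }  on : shard0000
//   { parkId: "CarlZ" } -->> { parkId: { $maxKey: 1 } }  on : shard0001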
92. BENEFITS OF SHARDING
1. INCREASES AVAILABLE MEMORY.
2. REDUCES LOAD ON THE SERVER.
3. INCREASES HARD DISK SPACE.
4. LOCATION-BASED SHARD KEYS CAN PUT DATA CLOSE TO THE USERS AND KEEP RELATED DATA TOGETHER.
104. AND THANK YOU TO EVERYONE WHO HELPED US
DR. BILL HOWE, UNIVERSITY OF WASHINGTON
JASON CHEN, MONGODB RECRUITER
KRISTINA CHODOROW (DEFINITIVE GUIDE AUTHOR)
FRANCESCA KRIHELY (MONGODB COMMUNITY MANAGER)
DR. MARKUS SCHMIDBERGER, RMONGODB
JOHANNES BRANDSTETTER, MONGOSOUP
(THE FIRST EUROPEAN PARTNER OF MONGODB TO PROVIDE
MONGODB AS A SERVICE)
DR. RAMNATH VAIDYANATHAN, RCHARTS
105. REFERENCES
MongoDB: http://www.mongodb.org
Book: MongoDB: The Definitive Guide – Kristina Chodorow
Book: NoSQL Distilled – Pramod J. Sadalage and Martin Fowler
NoSQL: http://en.wikipedia.org/wiki/NoSQL
MongoDB use cases: http://www.mongodb.com/use-cases
First NoSQL meetup notes: http://developer.yahoo.com/blogs/ydn/notes-nosql-meetup7663.html
Billion dollar club: http://graphics.wsj.com/billion-dollar-club/
Photos from Google