This document collects descriptions of presentations and webinars on using Tableau and MongoDB together for visual analytics of big data. It describes MongoDB as a NoSQL database that handles unstructured and semi-structured data such as JSON, and explains how Tableau connects to MongoDB through an ODBC driver so users can visualize the data without writing code. The material covers scenarios where big data comes from human, machine, and process sources, shows how the combination of Tableau and MongoDB's schema-on-read approach reduces the need for ETL, and previews demos of connecting Tableau to MongoDB using both the ODBC driver and a PostgreSQL interface.
Teradata QueryGrid to MongoDB Lightning Introduction (MongoDB)
[2:10 pm - 2:30 pm] This is where SQL and NoSQL work together. This session demonstrates joining MongoDB documents with data warehouse tables to perform new levels of analytics. Seamless self-service data access is accomplished via a simple SQL JSON notation from Teradata to MongoDB, so no more time and effort is required to co-locate data from both platforms in order to analyze it. The Teradata QueryGrid connector to MongoDB enables users to access data on two systems transparently in a self-service manner. This session introduces Teradata's new capability.
Webinar: Introducing the MongoDB Connector for BI 2.0 with Tableau (MongoDB)
Pairing your real-time operational data stored in a modern database like MongoDB with first-class business intelligence platforms like Tableau enables new insights to be discovered faster than ever before.
Many leading organizations already use MongoDB in conjunction with Tableau including a top American investment bank and the world’s largest airline. With the Connector for BI 2.0, it’s never been easier to streamline the connection process between these two systems.
In this webinar, we will create a live connection from Tableau Desktop to a MongoDB cluster using the Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB.
You’ll walk away knowing:
- How to configure MongoDB with Tableau using the updated connector
- Best practices for working with documents in a BI environment
- How leading companies are using big data visualization strategies to transform their businesses
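To make the connection flow concrete, here is a minimal sketch of querying MongoDB through the BI Connector's MySQL-compatible SQL interface, which is the same surface a BI tool like Tableau talks to. It assumes a local mongosqld is already running on its default port (3307) and has sampled a database; the database, collection, and credentials below are illustrative.

```python
import mysql.connector  # pip install mysql-connector-python

# Connect to mongosqld exactly as a BI tool would: over the MySQL protocol.
conn = mysql.connector.connect(
    host="127.0.0.1",
    port=3307,           # mongosqld's default listening port
    database="sales",    # hypothetical database sampled by the connector
    user="reporting",    # credentials depend on how auth was configured
    password="secret",
)
cursor = conn.cursor()

# The connector exposes documents as tables and columns, so plain SQL works.
cursor.execute("SELECT status, COUNT(*) FROM orders GROUP BY status")
for status, count in cursor:
    print(status, count)

conn.close()
```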
Webinar: Enterprise Data Management in the Era of MongoDB and Data Lakes (MongoDB)
With so much talk of how Big Data is revolutionizing the world and how a data lake with Hadoop and/or Spark will solve all your data problems, it is hard to tell what is hype, what is reality, and what is somewhere in between.
In working with dozens of enterprises at varying stages of their enterprise data management (EDM) strategy, MongoDB enterprise architect Matt Kalan sees the same challenges and misunderstandings arise again and again.
In this session, he will explain common challenges in data management, what capabilities are necessary, and what the future state of architecture looks like. MongoDB is uniquely capable of filling common gaps in the data lake strategy.
This session also includes a live Q&A portion during which you are encouraged to ask questions of our team.
Webinar: Enterprise Trends for Database-as-a-Service (MongoDB)
Two complementary trends are particularly strong in enterprise IT today: MongoDB itself, and the movement of infrastructure, platform, and software to as-a-service models. Being designed from the start to work in cloud deployments, MongoDB is a natural fit.
Learn how your enterprise can create its own MongoDB service offering, combining the advantages of MongoDB and cloud for agile, nearly instantaneous deployments. Ease your operations workload by centralizing enforcement points, standardizing best practices, and enabling elastic scalability.
We will provide you with an enterprise planning outline which incorporates needs and value for stakeholders across operations, development, and business. We will cover accounting, chargeback integration, and quantification of benefits to the enterprise (such as standardizing best practices, creating elastic architecture, and reducing database maintenance costs).
During this presentation, Infusion and MongoDB shared their mainframe optimization experiences and best practices. These have been gained from working with a variety of organizations, including a case study from one of the world’s largest banks. MongoDB and Infusion bring a tested approach that provides a new way of modernizing mainframe applications, while keeping pace with the demand for new digital services.
Modern architectures are moving away from a "one size fits all" approach. We are well aware that we need to use the best tools for the job. Given the large selection of options available today, chances are that you will end up managing data in MongoDB for your operational workload and with Spark for your high speed data processing needs.
When we model documents or data structures, there are key aspects to examine not only for functional and architectural purposes, but also to account for the distribution of data nodes, streaming capabilities, aggregation and queryability options, and how we can integrate data processing software such as Spark that can benefit from subtle but substantial model changes. A clear example is the choice between embedding and referencing documents, and its implications for high-speed processing.
Over the course of this talk we will detail the benefits of a good document model for the operational workload, as well as the transformations we should incorporate into our document model to suit Spark's high-speed processing capabilities.
We will look into the different options for connecting these two systems, how to model for different workloads, which operators to be aware of for top performance, and what designs and architectures to put in place so that all of these systems work well together.
We will also showcase libraries that enable the integration between Spark and MongoDB, such as the MongoDB Hadoop Connector, the Stratio connector, and the native MongoDB Spark Connector.
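As a concrete starting point, here is a minimal sketch of reading a MongoDB collection into a Spark DataFrame with the MongoDB Spark Connector. It assumes the 2.x-era connector (format "mongo") is on the classpath, for example via spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.11; the URI, database, and collection names are illustrative.

```python
from pyspark.sql import SparkSession

# Point the connector at a collection via the input URI (illustrative names).
spark = (SparkSession.builder
         .appName("mongo-spark-sketch")
         .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/shop.orders")
         .getOrCreate())

# Load the collection; the connector infers a schema by sampling documents.
df = spark.read.format("mongo").load()

# Process at Spark speed: average order total per status.
df.groupBy("status").avg("total").show()
```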
By the end of the talk I expect the attendees to have an understanding of:
How they connect their MongoDB clusters with Spark
Which use cases show a net benefit for connecting these two systems
What kind of architecture design should be considered for making the most of Spark + MongoDB
How documents can be modeled for better performance and operational processing while working with these data sets stored in MongoDB.
The talk is suitable for:
Developers that want to understand how to leverage Spark
Architects that want to integrate their existing MongoDB cluster and have real time high speed processing needs
Data scientists that know Spark, are experimenting with it, and want to use MongoDB as their persistence layer
MongoDB San Francisco 2013: Storing eBay's Media Metadata on MongoDB present... (MongoDB)
This session will be a case study of eBay’s experience running MongoDB for project Zoom, in which eBay stores all media metadata for the site. This includes references to pictures of every item for sale on eBay. This cluster is eBay's first MongoDB installation on the platform and is a mission critical application. Yuri Finkelstein, an Enterprise Architect on the team, will provide a technical overview of the project and its underlying architecture.
Webinar: High Performance MongoDB Applications with IBM POWER8 (MongoDB)
Innovative companies are building Internet of Things, mobile, content management, single view, and big data apps on top of MongoDB. In this session, we'll explore how the IBM POWER8 platform brings new levels of performance and ease of configuration to these solutions which already benefit from easier and faster design and development using MongoDB.
Webinar: Simplifying the Database Experience with MongoDB Atlas (MongoDB)
MongoDB Atlas is our database as a service for MongoDB. In this webinar you’ll learn how it provides all of the features of MongoDB, without all of the operational heavy lifting, and all through a pay-as-you-go model billed on an hourly basis.
Rapid Development and Performance By Transitioning from RDBMSs to MongoDB
Modern application requirements demand rich and dynamic data structures, fast response times, easy scaling, and low TCO to match rapidly changing customer and business requirements as well as the powerful programming languages used in today's software landscape.
Traditional approaches to solutions development with RDBMSs increasingly expose the gap between the modern development languages and the relational data model, and between scaling up vs. scaling horizontally on commodity hardware. Development time is wasted as the bulk of the work has shifted from adding business features to struggling with the RDBMSs.
MongoDB, the premier NoSQL database, offers a flexible and scalable solution to focus on quickly adding business value again.
In this session, we will provide:
- Overview of MongoDB's capabilities
- Code-level exploration of the MongoDB programming model and APIs and how they transform the way developers interact with a database
- An update on the exciting features in MongoDB 3.0
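As a taste of the code-level exploration above, here is a minimal PyMongo sketch of the document model; it assumes a local mongod, and the collection and field names are illustrative.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.crm

# Insert a rich, nested document directly -- no schema migration or ORM layer.
db.customers.insert_one({
    "name": "Ada",
    "tags": ["vip", "early-adopter"],
    "address": {"city": "Lisbon", "country": "PT"},
})

# Query a nested field with dot notation.
for doc in db.customers.find({"address.city": "Lisbon"}):
    print(doc["name"])
```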
A fotopedia presentation made at the MongoDay 2012 in Paris at Xebia Office.
Talk by Pierre Baillet and Mathieu Poumeyrol.
French Article about the presentation:
http://www.touilleur-express.fr/2012/02/06/mongodb-retour-sur-experience-chez-fotopedia/
Video to come.
Speaker: Ronan Bohan, Solutions Architect, MongoDB
Speaker: Viady Krishnan
Level: 100 (Beginner)
Track: Jumpstart
Get started with the BI connector and Tableau in this introductory session. We will give you insight into how you can view your MongoDB data in traditional BI tools and an overview of connecting Tableau with MongoDB. After attending this session, students should be able to connect their analytics tool of choice to a MongoDB data store using the BI connector, secure their client connection, and know how to enable authentication. Audience members should be familiar with using analytics tools like Tableau for business analytics, and know how to set up and run analytics in a BI tool. This session will use Tableau as an example.
This is a Jumpstart session, held before the keynotes, designed to give you an overview of MongoDB basics so you can dive into more advanced technical sessions later in the day.
What You Will Learn:
- How to connect your analytics tool of choice to a MongoDB data store using the BI connector.
- How to view MongoDB data in Tableau or another BI tool.
- How to secure your client connection to MongoDB.
Webinar: Faster Big Data Analytics with MongoDB (MongoDB)
Learn how to leverage MongoDB and Big Data technologies to derive rich business insight and build high performance business intelligence platforms. This presentation includes:
- Uncovering Opportunities with Big Data analytics
- Challenges of real-time data processing
- Best practices for performance optimization
- Real world case study
This presentation was given in partnership with CIGNEX Datamatics.
Presented by Claudius Li, Solutions Architect at MongoDB, at MongoDB Evenings New England 2017.
MongoDB Atlas is the premier database as a service offering. Find out how MongoDB Atlas can help your team to deploy more easily, develop faster and easily manage deployment, maintenance, upgrades and expansions. We will also demonstrate some of the key features and tools that come with MongoDB Atlas.
Webinar: Elevate Your Enterprise Architecture with In-Memory Computing (MongoDB)
The advantages of in-memory computing are well understood. Data can be accessed in RAM nearly 100,000 times faster than retrieving it from disk, delivering orders-of-magnitude higher performance for the most demanding applications. Examples include real-time re-scoring of personalized product recommendations as users are browsing a site, or trading stocks in immediate response to market events.
In this webinar, we’ll briefly explore the trends driving in-memory computing (IMC), the challenges that surround it, and how MongoDB fits into the big picture.
Topics covered in this session will include:
- IMC use cases and customer case studies
- Critical capabilities and components of IMC
- How MongoDB plays a role in an overall IMC strategy within your enterprise architecture
- Suggested architectures related to MongoDB’s in-memory capabilities:
-- Integration with Apache Spark
-- In-Memory Storage Engine
-- Integration with BI tools
Unlocking Operational Intelligence from the Data Lake (MongoDB)
Hadoop-based data lakes are enabling enterprises and governments to efficiently capture and analyze unprecedented volumes of data. Join this webinar to learn how digital transformation is driving the rise of the data lake, the role Hadoop plays in generating new classes of analytics and insight, the critical capabilities you need to evaluate in an operational database for your data lake, and more.
MongoDB Evenings Dallas: What's the Scoop on MongoDB & Hadoop (MongoDB)
What's the Scoop on MongoDB & Hadoop
Jake Angerman, Sr. Solutions Architect, MongoDB
MongoDB Evenings Dallas
March 30, 2016 at the Addison Treehouse, Dallas, TX
AWS is an incredibly popular environment for running MongoDB deployments. Today you have many choices about instance type, storage, network config, security, how you configure MongoDB processes, and more. In addition, you now have options when it comes to tooling to help you manage and operate your deployment. In this session, we’ll take a look at several recommendations that can help you get the best performance out of AWS.
Connecting Teradata and MongoDB with QueryGrid (MongoDB)
This is where SQL and NoSQL work together. This session will drill into the technical details of how to join MongoDB documents with data warehouse tables to perform new levels of analytics, and business users can do this with popular BI tools. Seamless self-service data access is accomplished via a simple SQL JSON notation from Teradata to MongoDB, so no more time and effort is required to co-locate data from both platforms in order to analyze it. The Teradata QueryGrid connector to MongoDB enables users to access data on two systems transparently in a self-service manner. We will explore how the shards and query routers exchange data with SQL-based systems.
Learn what you need to consider when moving from the world of relational databases to a NoSQL document store.
Hear from Developer Advocate Glynn Bird as he explains the key differences between relational databases and JSON document stores like Cloudant, as well as how to dodge the pitfalls of migrating from a relational database to NoSQL.
MongoDB Europe 2016 - Using MongoDB to Build a Fast and Scalable Content Repo... (MongoDB)
MongoDB can be used in the Nuxeo Platform as a replacement for more traditional SQL databases. Nuxeo's content repository, which is the cornerstone of this open source enterprise content management platform, integrates completely with MongoDB for data storage. This presentation will explain the motivation for using MongoDB and will emphasize the different implementation choices driven by the very nature of a NoSQL datastore like MongoDB. Learn how Nuxeo integrated MongoDB into the platform which resulted in increased performance (including actual benchmarks) and better response to some use cases.
Speaker: Jerry Reghunadh, Architect, CAPIOT Software Pvt. Ltd.
Level: 200 (Intermediate)
Track: Microservices
One of the leading assisted e-commerce players in India approached CAPIOT to rebuild their ERP system from the ground up. Their existing PHP-MySQL setup, while rich in functionality and having served them well for under half a decade, would not scale to meet future demands given the exponential growth they were experiencing.
We built the entire system using a microservices architecture. To develop APIs we used Node.js, Express, Swagger and Mongoose, and MongoDB was used as the active data store. During the development phase, we solved several problems ranging from cross-service calls, data consistency, service discovery, and security.
One of the issues that we faced is how to effectively design and make cross-service calls. Should we implement a cross-service call for every document that we require or should we duplicate and distribute the data, reducing cross-service calls? We found a balance between these two and engineered a solution that gave us good performance.
In addition, our current system has 36 independent services. We enabled services to auto-discover and make secure calls.
We used Swagger to define our APIs first and enforce request and response validations and Mongoose as our ODM for schema validation. We also heavily depend on pre-save hooks to validate data and post-save hooks to trigger changes in other systems. This API-driven approach vastly enabled our frontend and backend teams to scrum together on a single API spec without worrying about the repercussions of changing API schemas.
What You Will Learn:
- How we used Swagger and Mongoose to off-load validations and schema enforcement, with pre-save hooks validating data and post-save hooks triggering changes in other systems (the hook pattern is sketched below).
- How microservices and cross-service calls work, and how we balanced making a cross-service call for every document against duplicating and distributing data.
- How we implemented microservice auto-discovery: our current system has 36 independent services, so we enabled services to auto-discover and make secure calls.
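The talk's stack is Node.js with Mongoose, so the following is only an analogous sketch of the validate-on-save hook pattern, written in Python with MongoEngine; the model, field names, and allowed statuses are hypothetical.

```python
from mongoengine import Document, StringField, connect, signals

connect("erp")  # assumes a local mongod; "erp" is a hypothetical database

class Order(Document):
    status = StringField(required=True)

def validate_status(sender, document, **kwargs):
    # Runs before every save, mirroring a Mongoose pre-save hook.
    if document.status not in ("open", "shipped", "closed"):
        raise ValueError("invalid status: %s" % document.status)

# MongoEngine signals require the blinker package.
signals.pre_save.connect(validate_status, sender=Order)

Order(status="open").save()  # passes validation and persists the document
```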
Elevate MongoDB with ODBC/JDBC [4:05 pm - 4:25 pm] Adoption of MongoDB is growing across the enterprise and disrupting existing business intelligence, analytics, and data integration infrastructure. Join us to disrupt that disruption using ODBC and JDBC access to MongoDB for instant out-of-the-box integration with existing infrastructure, elevating and expanding your organization's MongoDB footprint. We'll talk about common challenges and gotchas that shops face when exposing unstructured and semi-structured data through these established data connectivity standards. Existing infrastructure requirements should not dictate developers' freedom of choice in a database.
Webinar: Live Data Visualisation with Tableau and MongoDB (MongoDB)
MongoDB 3.2 introduces a new way for familiar Business Intelligence (BI) tools to access your real-time operational data, opening it up to data analysts and data scientists and enabling new insights to be discovered faster than ever before. Tableau accesses the JSON document data stored in MongoDB via this new BI connector. We will cover how the BI connector works by creating a relational view definition of a JSON data set that is then used to present a tabular SQL/ODBC interface to Tableau. Then we will set up a live connection from Tableau Desktop to the MongoDB Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB. This webinar will cover:
What is the MongoDB BI Connector?
Setting up a connection from Tableau to the MongoDB BI Connector.
How to perform data discovery with Tableau connected to live MongoDB data.
Publishing a Tableau Dashboard for sharing insights.
SlamData - How MongoDB Is Powering a Revolution in Visual Analytics (John De Goes)
Slides from my presentation at MongoDB Days Silicon Valley. I discuss what SlamData is, the challenges it had to solve to build an analytics solution in the NoSQL space, and how specific features of MongoDB help power its advanced analytics functionality.
Blending Hadoop and MongoDB with Pentaho [11:10 am - 11:30 am] For eCommerce companies, knowing how promoted wish-lists can spark consumer spending is an analytics goldmine. In this lightning talk, Bo Borland will demonstrate how Pentaho analytics can blend click-stream data about promoted wish-lists with sales transaction records using Hadoop, MongoDB and Pentaho to reveal patterns in online shopping behavior. Regardless of your industry or specific use model, come to this session to learn how to blend MongoDB data with any data source for greater business insight. Pentaho offers the first end-to-end analytic solution for MongoDB. From data ingestion to pixel-perfect reporting and ad hoc "slice and dice" analysis, the solution meets today's growing demand for a 360-degree view of your business.
In today's businesses, an application going down can mean millions of dollars in lost revenue. Learn how to optimize the performance of your enterprise applications powered by MongoDB with IBM Application Performance Management (APM). IBM APM will give you full visibility into your application stack and infrastructure, track every transaction going through it, and help you diagnose problems in mere minutes. With built-in analytics to predict outages before they occur and integration directly into MMS, IBM APM is a must-have solution to keep your business-critical applications up and your revenue flowing.
Consolidate and Simplify MongoDB Infrastructure with All-flash (MongoDB)
Consolidate and Simplify MongoDB Infrastructure with All-flash [3:40 pm - 4:00 pm] Even the most well-written MongoDB applications can be limited by legacy infrastructure, which is why so many MongoDB customers have migrated their internal storage to an all-flash SAN. Join us in this session as we profile two example customers in their migration to all-flash, with benefits that include breakthrough performance, a dramatic reduction in datacenter footprint, and simplified management.
Transforming your Business with Scale-Out Flash: How MongoDB & Flash Accelera... (MongoDB)
Transforming your Business with Scale-Out Flash: How MongoDB & Flash Accelerate Application Performance [1:40 pm - 2:00 pm] MongoDB lets you build next-generation applications that require new levels of performance and latency. Flash has become a critical component in meeting these needs, and this session will focus on how to best leverage Flash in a MongoDB deployment, covering key best practices and approaches. Armed with these best practices, as your environment scales, the ongoing management of Flash within a traditional DAS architecture may still introduce some fundamental challenges. We will introduce EMC's XtremIO platform, which fully automates and offloads this overhead, allowing MongoDB administrators and architects to focus on driving new capabilities into their applications, all while scaling infinitely. In addition, key features like data reduction, agile copy services, and free encryption extend the value of Flash well beyond what can be done with traditional DAS architectures.
Bluemix provides developers with multiple open-source compute options to run their apps, chief among them Cloud Foundry, the world’s leading platform-as-a-service (PaaS) offering. Cloud Foundry enables teams to practice continuous delivery by supporting the full software development lifecycle, from dev to deployment. One of the key advantages of the platform is the ability it gives developers to easily configure and start using a MongoDB datastore for their application. In this lightning talk, Bluemix developer advocate Jake Peyser will go over Cloud Foundry and best practices for data storage when using the platform. He will then take attendees through a live demo where he will show users how to quickly configure a MongoDB instance in Bluemix and connect it to an application.
Redis & MongoDB: Stop Big Data Indigestion Before It Starts (MongoDB)
MongoDB @ Redis Labs [10:40 am - 11:00 am] Efficiently digesting data in large volumes can prove to be challenging for any database. The challenges are compounded when this influx must be analyzed on the fly, or "tasted", to satisfy the sophisticated palates of modern apps. Luckily, there are several proven remedies you can concoct with Redis to help with potential indigestion.
MongoDB Linux Porting, Performance Measurements and Scaling Advantage usi... (MongoDB)
MongoDB has been ported to Linux on z Systems. MongoDB performance benefits from the superior single-thread performance of the System z processor and system design. The goal of the presentation is to demonstrate the value of running MongoDB on Linux on z Systems by comparing the scaling behavior of MongoDB sharding on x86 and on the mainframe. The presentation will give details on performance numbers and the scaling behavior of MongoDB on z Systems versus Intel-based servers. It will also sketch how MongoDB sharding on Linux on z Systems can be dockerized to facilitate setup.
Overcoming Scaling Challenges in MongoDB Deployments with SSD (MongoDB)
Horizontal scaling of databases can increase performance and capacity, but adding nodes also increases infrastructure and management complexities. Cluster management can challenge even the most seasoned IT professional. While vertical scaling is easier to implement, it has traditionally been limited by memory and disk throughput. As both SSD latency and price continue to improve, the MongoDB database scaling equation changes. This session will review a number of SSD technologies that Intel employs (SATA, NVMe) and their impacts on I/O performance and database scaling. We will look at various architectural options for optimizing I/O based on our discussions with real world users. We will also provide attendees a glimpse at our future plans in terms of technologies in the storage area.
Running Natural Language Queries on MongoDB (MongoDB)
One of the most sought-after features of any user-centric web application is search functionality. QBurst revamped the search interface by using NLP and integrating with MongoDB. Their solution is designed to identify the key components and operators within a natural language query and use them against MongoDB to extract records. This session explains QBurst's technical solution in detail.
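As an illustration of the final step described here (not QBurst's actual pipeline), suppose the NLP layer has already extracted a field, an operator, and a value from a question like "orders over 100 dollars last week"; turning those components into a MongoDB query is then a small mapping exercise. All names below are hypothetical.

```python
import datetime
from pymongo import MongoClient

def to_filter(field, operator, value, since=None):
    # Map extracted operators onto MongoDB query operators.
    ops = {">": "$gt", "<": "$lt", "=": "$eq"}
    query = {field: {ops[operator]: value}}
    if since is not None:
        query["created_at"] = {"$gte": since}  # time phrase -> date range
    return query

db = MongoClient().shop  # assumes a local mongod
last_week = datetime.datetime.utcnow() - datetime.timedelta(days=7)

# "orders over 100 dollars last week" -> total > 100, created_at >= last week
for doc in db.orders.find(to_filter("total", ">", 100, since=last_week)):
    print(doc["_id"], doc["total"])
```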
Webinar: Working with Graph Data in MongoDB (MongoDB)
With the release of MongoDB 3.4, the number of applications that can take advantage of MongoDB has expanded. In this session we will look at using MongoDB for representing graphs and how graph relationships can be modeled in MongoDB.
We will also look at a new aggregation operation that we recently implemented for graph traversal and computing transitive closure. We will include an overview of the new operator and provide examples of how you can exploit this new feature in your MongoDB applications.
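A minimal sketch of that graph-traversal stage, $graphLookup (added in MongoDB 3.4), is below. It assumes an employees collection in which each document's reports_to field names its manager; the collection and field names are illustrative.

```python
from pymongo import MongoClient

db = MongoClient().hr  # assumes a local mongod

pipeline = [
    {"$match": {"name": "Ada"}},
    {"$graphLookup": {
        "from": "employees",
        "startWith": "$reports_to",        # seed the traversal
        "connectFromField": "reports_to",  # follow this field...
        "connectToField": "name",          # ...to the documents it names
        "as": "management_chain",          # transitive closure lands here
    }},
]

for doc in db.employees.aggregate(pipeline):
    print([m["name"] for m in doc["management_chain"]])
```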
Data - and the things we want to do with data - exist in many different forms. Getting those formats and tasks to play nicely together can sometimes be a painstaking grind. The difficulty escalates if we need to switch between specialized tools, designed to address only a small subset of what we need to accomplish.
Enter, the Composable DataFlow.
Composable DataFlows are event-driven pipelines that consist of functional modules, strung together to form full analytical workflows. For developers, DataFlows can represent independently-deployable microservices, and can be used as part of a broader Microservice Architecture.
In this session, we will use a Composable DataFlow to extract data via API, transform JSON into a tabular structure, and load that data into a database of our own creation (using Composable DataPortal). We will also explore the DataFlow's Module Library to see what other options we have to help make our data... flow.
Meetup: https://www.meetup.com/boston-data-engineering/events/289525162/
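The same extract-transform-load pattern the session describes can be sketched with stock Python libraries; this is only an illustration of the idea, not Composable itself, and the API URL and table name are hypothetical.

```python
import requests
import pandas as pd
from sqlalchemy import create_engine

# Extract: pull JSON records from an API (hypothetical endpoint).
records = requests.get("https://api.example.com/events").json()

# Transform: flatten nested JSON into a tabular structure.
df = pd.json_normalize(records)

# Load: write the table into a database.
engine = create_engine("sqlite:///events.db")
df.to_sql("events", engine, if_exists="replace", index=False)
```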
Myth Busters II: BI Tools and Data Virtualization are Interchangeable (Denodo)
Watch Here: https://bit.ly/2NcqU6F
We take on the second myth about data virtualization, the one suggesting that a BI tool can substitute for data virtualization software.
You might be thinking: if I can run multi-source queries and define a logical model in my reporting tool, why would I need data virtualization software?
Reporting tools, no doubt important and necessary, focus on the visualization of data and its presentation to the business user. Data virtualization is a governed data access layer designed to connect to and provide transparency across all enterprise data.
Yet the myth suggests that these technologies are interchangeable. So we’re going to take it on!
Watch this webinar as we compare and contrast BI tools and data virtualization to draw a final conclusion.
Accelerate Self-Service Analytics with Data Virtualization and Visualization (Denodo)
Watch full webinar here: https://bit.ly/39AhUB7
Enterprise organizations are shifting to self-service analytics because business users need real-time access to holistic, consistent views of data, regardless of its location, source, or type, in order to arrive at critical decisions.
Data Virtualization and Data Visualization work together through a universal semantic layer. Learn how they enable self-service data discovery and improve performance of your reports and dashboards.
In this session, you will learn:
- Challenges faced by business users
- How data virtualization enables self-service analytics
- Use case and lessons from customer success
- Overview of the highlighted features in Tableau
Best practices to deliver data analytics to the business with Power BI (Satya Shyam K Jayanty)
Bring your data to life with Power BI visualization and insights!
With the changing landscape of Power BI features, it is essential to get hold of the configuration and deployment practices within your data platform that will keep you on par with compliance and security requirements. In this session we will go from the basics to advanced tricks across this landscape:
How to deploy Power BI?
How to implement configuration parameters and package BI features as part of an Office 365 rollout in your organisation?
What are newest features and enhancements on this Power BI landscape?
How to manage on-premise vs on-cloud connectivity?
How can you help and support the Power BI community as well?
Cloud computing has made it possible to get data to the end user in a few clicks, so within this session's objectives we will also review how to manage and connect on-premises data to the cloud, taking full advantage of data catalogue capabilities while keeping data secure per Information Governance standards. Beyond the nuts and bolts, performance is something every admin must keep up with, so we will look at a few settings that maximize performance and optimize access to data as required. You will gain understanding and insight into the range of tools available for your Business Intelligence needs, with a showcase demonstrating where to begin and how to proceed in the BI world.
Horses for Courses: Database Roundtable (Eric Kavanagh)
The blessing and curse of today's database market? So many choices! While relational databases still dominate the day-to-day business, a host of alternatives has evolved around very specific use cases: graph, document, NoSQL, hybrid (HTAP), column store, the list goes on. And the database tools market is teeming with activity as well. Register for this special Research Webcast to hear Dr. Robin Bloor share his early findings about the evolving database market. He'll be joined by Steve Sarsfield of HPE Vertica, and Robert Reeves of Datical in a roundtable discussion with Bloor Group CEO Eric Kavanagh. Send any questions to info@insideanalysis.com, or tweet with #DBSurvival.
Customer migration to azure sql database from on-premises SQL, for a SaaS app... (George Walters)
Why would someone take a working on-premises SaaS infrastructure and migrate it to Azure? We review the technology decisions behind this conversion and the business choices behind migrating to Azure. The SQL 2012 infrastructure and application were migrated to PaaS services. Finally, we consider how we would build this architecture in 2019.
Relational databases were conceived to digitize paper forms and automate well-structured business processes, and still have their uses. But RDBMS cannot model or store data and its relationships without complexity, which means performance degrades with the increasing number and levels of data relationships and data size. Additionally, new types of data and data relationships require schema redesign that increases time to market.
A graph database like Neo4j naturally stores, manages, analyzes, and uses data within the context of connections, which means Neo4j provides faster query performance and vastly improved flexibility in handling complex hierarchies than SQL. Join this webinar to learn why companies are shifting away from RDBMS towards graphs to unlock the business value in their data relationships.
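To make the contrast concrete, here is a minimal sketch of the kind of query where a graph database shines: a variable-depth traversal that would require recursive self-joins in SQL. It assumes a local Neo4j instance reachable over Bolt; the label, relationship type, and credentials are illustrative.

```python
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

with driver.session() as session:
    # Everyone reachable from Ada within four KNOWS hops, at any depth.
    result = session.run(
        "MATCH (a:Person {name: $name})-[:KNOWS*1..4]->(b:Person) "
        "RETURN DISTINCT b.name AS name",
        name="Ada",
    )
    for record in result:
        print(record["name"])

driver.close()
```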
Ryan Boyd, Developer Relations at Neo4j
Ryan is a SF-based software engineer focused on helping developers understand the power of graph databases. Previously he was a product manager for architectural software, built applications and web hosting environments for higher education, and worked in developer relations for twenty products during his 8 years at Google. He enjoys cycling, sailing, skydiving, and many other adventures when not in front of his computer.
DAS Slides: Data Architect vs. Data Engineer vs. Data Modeler (DATAVERSITY)
The increasing focus on data in today’s organization has increased demand for critical roles such as data architect, data engineer, and data modeler. But there is often confusion and ambiguity around what these roles entail, and what overlap exists between them. This webinar will discuss these data-centric roles and their place in the data-driven organization.
Choosing technologies for a big data solution in the cloud (James Serra)
Has your company been building data warehouses for years using SQL Server? And are you now tasked with creating or moving your data warehouse to the cloud and modernizing it to support "Big Data"? What technologies and tools should you use? That is what this presentation will help you answer. First we will cover what questions to ask concerning data (type, size, frequency), reporting, performance needs, on-prem vs cloud, staff technology skills, OSS requirements, cost, and MDM needs. Then we will show you common big data architecture solutions and help you to answer questions such as: Where do I store the data? Should I use a data lake? Do I still need a cube? What about Hadoop/NoSQL? Do I need the power of MPP? Should I build a "logical data warehouse"? What is this lambda architecture? Can I use Hadoop for my DW? Finally, we'll show some architectures of real-world customer big data solutions. Come to this session to get started down the path to making the proper technology choices in moving to the cloud.
Amazon QuickSight is a fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. In this session, we demonstrate how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We also introduce SPICE - a new Super-fast, Parallel, In-memory, Calculation Engine in Amazon QuickSight, which performs advanced calculations and render visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users and petabytes of data. Lastly, you will see how Amazon QuickSight provides you with smart visualizations and graphs that are optimized for your different data types, to ensure the most suitable and appropriate visualization to conduct your analysis, and how to share these visualization stories using the built-in collaboration tools.
Presented by: Matthew McClean, AWS Partner Solutions Architect, Amazon Web Services
TestGuild and QuerySurge Presentation - DevOps for Data Testing (RTTS)
This slide deck is from one of our 4 webinars in our half-day series in conjunction with Test Guild.
Chris Thompson and Mike Calabrese, Senior Solution Architects and QuerySurge experts, provide great information, a demo and lots of humor in this webinar on how to implement DevOps for Data in your DataOps pipeline.
This webinar was performed in conjunction with Test Guild.
To watch the video, go to:
https://youtu.be/1ihuRPgY_rs
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
During this talk we'll navigate through a customer's journey as they migrate an existing MongoDB deployment to MongoDB Atlas. While the migration itself can be as simple as a few clicks, the prep/post effort requires due diligence to ensure a smooth transfer. We'll cover these steps in detail and provide best practices. In addition, we’ll provide an overview of what to consider when migrating other cloud data stores, traditional databases and MongoDB imitations to MongoDB Atlas.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...MongoDB
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDBMongoDB
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...MongoDB
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
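As a hedged illustration of the schema-design point above, here is a minimal pymongo sketch of the widely used "bucketing" pattern for time-series data; the database, collection, and field names are hypothetical and not taken from the talk:

```python
from datetime import datetime
from pymongo import MongoClient

# Illustrative only: the "sensors" database and field names are hypothetical.
coll = MongoClient()["sensors"]["readings"]

# Bucketing pattern: store one document per device per hour, and push
# individual measurements into an array instead of one document each.
# This reduces document count and index size for IoT-scale workloads.
ts = datetime(2020, 5, 1, 10, 37)
coll.update_one(
    {"device_id": "sensor-42",
     "bucket_start": ts.replace(minute=0, second=0, microsecond=0)},
    {
        "$push": {"measurements": {"t": ts, "temp_c": 21.4}},
        "$inc": {"count": 1},
    },
    upsert=True,
)
```

Storing one bucket document per device per hour keeps write amplification and index sizes manageable compared with one document per reading.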
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
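As a rough sketch of what the setup can look like, the following uses PyMongo's explicit client-side encryption API with a throwaway local master key; the namespaces and sample value are illustrative, and the session's own demo may differ:

```python
import os
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption, Algorithm
from bson.codec_options import CodecOptions
from bson.binary import STANDARD

# Local KMS with a random 96-byte master key -- fine for a demo,
# never for production (requires: pip install "pymongo[encryption]").
kms_providers = {"local": {"key": os.urandom(96)}}
client = MongoClient()

client_encryption = ClientEncryption(
    kms_providers,
    "encryption.__keyVault",  # key vault namespace (hypothetical)
    client,
    CodecOptions(uuid_representation=STANDARD),
)
key_id = client_encryption.create_data_key("local")

# The value is encrypted on the client; the server only ever sees ciphertext.
ciphertext = client_encryption.encrypt(
    "123-45-6789",
    Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
    key_id=key_id,
)
```

Deterministic encryption, as used here, is what preserves equality queryability on the encrypted field.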
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB
MongoDB Kubernetes operator is ready for prime time. Learn how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB
Query performance should be the unsung hero of an application, but without proper configuration, can become a constant headache. When used properly, MongoDB provides extremely powerful querying capabilities. In this session, we'll discuss concepts like equality, sort, range, managing query predicates versus sequential predicates, and best practices to building multikey indexes.
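A minimal sketch of the "equality, sort, range" guideline the session refers to, using a hypothetical orders collection:

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

coll = MongoClient()["shop"]["orders"]  # hypothetical database/collection

# "Equality, Sort, Range" ordering for a compound index: fields matched
# by equality first, then the sort field, then range-filtered fields.
# Serves: find({"status": "shipped", "total": {"$gt": 100}})
#             .sort("order_date", -1)
coll.create_index([
    ("status", ASCENDING),       # equality predicate
    ("order_date", DESCENDING),  # sort key
    ("total", ASCENDING),        # range predicate
])
```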
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB
The aggregation pipeline has powered analysis of your data since version 2.2. Version 4.2 adds even more capability: you can now use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
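For instance, a pipeline can maintain a materialized view with the 4.2 $merge stage; this sketch uses hypothetical collection names:

```python
from pymongo import MongoClient

db = MongoClient()["shop"]  # hypothetical database

# MongoDB 4.2+: $merge lets a pipeline write its results into an
# existing collection, enabling incrementally refreshed materialized views.
db["orders"].aggregate([
    {"$group": {"_id": "$customer_id", "lifetime_value": {"$sum": "$total"}}},
    {"$merge": {"into": "customer_rollups", "whenMatched": "replace"}},
])
```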
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long-term, archival data in cost-effective storage like Amazon S3, Google Cloud Storage, and Azure Blob Storage. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm: the secret ingredient for better app...MongoDB
…to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications faster.
MongoDB .local Paris 2020: Upply @MongoDB: When Machine Learning...MongoDB
It has never been easier to order online and have it delivered in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion.
Data is well known in the Supply Chain world (routes, goods information, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise with Data Science, Upply is redefining the fundamentals of the Supply Chain, enabling every player to overcome the volatility and inefficiency of the market.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Tableau & MongoDB: Visual Analytics at the Speed of Thought
1. Tableau & MongoDB – Visual Analytics at the Speed of Thought
MongoDB World 2015
Tableau Software
June 2nd, 2015
2. Introduction
Jeff Feng | Product Manager – Big Data | @jtfeng | jfeng@tableau.com
Clara Siegel | Product Manager | @clara_siegel | csiegel@tableau.com
Our Viz-dentials
Clara – let me know if this works for you!
14. Visual analytics
• Ad-hoc calculations
• New Calculation Editor
• Auto-complete for calculations
• Level of detail expressions
• Drag-and-drop analytics
• Instant Analytics
• Lasso and radial selection
• Geographic search
• New pan-and-zoom experience
• Demographic data layers
Tableau Server
• Vizportal - New Server & improved interface
• Infinite scrolling
• Universal search
• Improved Permissions management
• High Availability Improvements
• REST APIs for provisioning and content management
• Tabcmd Improvements
• New Admin Views
Performance
• Parallel queries
• Data Engine Vectorization
• Parallel aggregation
• Temp table support on Data Server
• Saved Query Caching
• Query Fusion
• Query Batch Ordering
• Shadow Extracts
User Experience
• Redesigned Start and Connect Experience
• Enhanced Story Points Formatting
• Responsive Marks
• Fast Tooltips
• Thumbnail Previews in Desktop
• Reset button in continuous Quick Filters
Data Preparation
• Excel Clean-up
• Pivot
• Data Split
• REGEX
• Metadata grid
• Data Extract API for Mac OS
• Publish and append in the TDE API
• Access files from SPSS, SAS and R
• Improved Salesforce connector
Mobile
• Redesigned App Experience
• Offline snapshots of Favorites
• Create and Edit calculations in Mobile Authoring
19. Scenarios: 3 main sources of Big Data
Human-generated data
+ Social media
+ Emails, text messages
+ YouTube videos
Machine-generated data
+ Sensors
+ Internet of Things
Process-generated data
+ Business systems
+ Web logs
21. JSON example:
{
  "name": { "first": "Michael", "last": "Smith" },
  "hobbies": ["ski", "soccer"],
  "district": "Los Altos"
}
{
  "name": { "first": "Jennifer", "last": "Gates" },
  "hobbies": ["sing"],
  "preschool": "CCLC"
}
RDBMS example:
Name       Gender  Age
Michael    M       6
Jennifer   F       3
JSON is both schema-less and complex
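Because the two documents above carry different fields, any tool that presents them relationally must first discover the union of field paths. A toy sketch of that inference step (not the actual algorithm of any particular driver):

```python
def infer_paths(doc, prefix=""):
    """Walk a document and collect flattened column paths, e.g. 'name.first'.
    Arrays are kept as single columns in this toy version."""
    paths = set()
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            paths |= infer_paths(value, path + ".")
        else:
            paths.add(path)
    return paths

docs = [
    {"name": {"first": "Michael", "last": "Smith"},
     "hobbies": ["ski", "soccer"], "district": "Los Altos"},
    {"name": {"first": "Jennifer", "last": "Gates"},
     "hobbies": ["sing"], "preschool": "CCLC"},
]
columns = set().union(*(infer_paths(d) for d in docs))
# -> {'name.first', 'name.last', 'hobbies', 'district', 'preschool'}
```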
23. Today’s use cases are driving the need for a new generation of databases
Hierarchical Databases
• HW & Application-specific data
Relational Databases
• Application independent
• Scale-up architecture
• Structured data only
• Schema-on-write
• Limited data processing
• High cost
NoSQL Databases
• ALL DATA – Structured & Unstructured
• Massive scale – scale-out
• Schema-on-read
• Storage with Compute
• Low cost
24. Tableau plays a fundamental role in Big Data analysis
• Broad access to Big Data platforms
• Visual analytics without coding
• Hybrid data architecture
• Data blending across data sources
• Platform query performance
• Consistent interface to visualizing data
30. Tableau users can connect to MongoDB through an ODBC interface
[Diagram: Application → ODBC Driver (ODBC Interface / Driver Implementation) → Data; the driver implementation translates SQL-92 to the MongoDB Native API]
31. Tableau users can connect to MongoDB through an ODBC interface
[Diagram: ODBC Driver (ODBC Interface / Driver Implementation); SQL-92 → MongoDB Native API]
Simba MongoDB ODBC Driver
• Translates SQL into Native MongoDB API calls
• Allows users to infer or define schemas on schema-less JSON data
• Converts JSON data to relational data
• Based on ODBC 3.80 Standard
• Full 64-bit and 32-bit support
• Full SQL support
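To make the first bullet concrete, here is roughly how a SQL statement issued by a BI tool could map to a native MongoDB call; the query shapes are illustrative, not the Simba driver's actual output:

```python
from pymongo import MongoClient

coll = MongoClient()["demo"]["users"]  # hypothetical database/collection

# SELECT name.first, district FROM users WHERE district = 'Los Altos'
# could translate to a find() with a filter and a projection:
cursor = coll.find(
    {"district": "Los Altos"},
    {"name.first": 1, "district": 1, "_id": 0},
)
for row in cursor:
    print(row)
```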
32. Tableau + MongoDB Demo: Simba Driver
<CLARA TO FILL IN OVER-RIDING QUESTION WE WANT TO ANSWER OF OUR DATA>
Clara – please fill in question and/or add slides you need to set up your demo
33. <INSERT SLIDES FROM MONGODB>
Ron and/or Asya – please insert any slides you’d like to include on the deep dive for the new interface to MongoDB
34. Tableau + MongoDB Demo: PostgreSQL Interface
<JEFF OR CLARA TO FILL IN OVER-RIDING QUESTION WE WANT TO ANSWER OF OUR DATA>
Jeff, Clara – please fill in question and/or add slides you need to set up your demo
37. Jeff Feng | Product Manager | @jtfeng | jfeng@tableau.com
Clara Siegel | Product Manager | @clara_siegel | csiegel@tableau.com
Ron Avnur | VP, Product Management | ron.avnur@mongodb.com
Asya Kamsky | Principal Solutions Architect | asya@mongodb.com
Abstract
Tableau enables people to ask questions of their data by bringing analysis and visualization together with revolutionary technology. In this session, you’ll learn how to leverage Tableau and MongoDB for visual analytics of rich JSON data at the speed of thought, dramatically reducing the time-to-insight for users. The talk will include interactive demos and best practices to drive smart and fast business insights.
We believe in the triumph of facts. We believe in unleashing human ingenuity. We believe in empowered workplaces. We enable people to contribute and make achievements that they consider to be the highest use of their skills, intellect, and capabilities. When this happens, they improve their lives, their organizations, and the world.
We do this by making software that helps people, regardless of technical skill, see and understand their data. This liberates people’s natural curiosity and creative energy. It enables them to have a conversation with their data that was never before possible – leading to valuable discoveries that challenge the status quo.
Traditional BI implementations are famous for falling flat on their face, leaving customers wary and shy of taking a chance on BI projects. History has left a sour taste amongst many companies, leaving them distrustful of change.
We have 4 main products
We are SWIMMING in data.
Companies are literally DROWNING in a TIDAL WAVE of Big Data
And not only is it NOT going away… IT’S GETTING WORSE!
Eric Schmidt said that 5 EXABYTES of information, // that’s a 1 followed by 18 zeros // was created from the DAWN of civilization until 2003.
<CLICK>
NOW that amount of information is created every 2 days
Just remember the 4 V’s of Big Data: VOLUME, VELOCITY, VARIETY & VALUE
THIS is the ACADEMIC view of Big Data
There are 3 main signs or sources of Big Data
<CLICK>
Number ONE look for Process Generated Data. This includes data from business systems and web logs.
For business systems, that might mean analyzing operational data from Salesforce, NetSuite, TFS, Alpo, Egencia, & Concur.
Or perhaps operational web log data from their website // THAT IS BIG DATA
<CLICK>
Number TWO look for Machine Generated Data // that includes sensors and the Internet of Things.
Smartphones generate TONS of data // As do cars, elevators and factory machinery
THAT IS BIG DATA
<CLICK>
Lastly, there’s all the human-generated data.
E-mail data, social data, document data // THAT IS BIG DATA
Something else you should know about Big Data
Only 20% of the world’s data is structured data
<CLICK>
Which means that MOST of the world’s data is UNSTRUCTURED // and NOT immediately accessible.
<CLICK>
We are moving from a WORLD of FLAT FILES to a WORLD of JSON
All of this wreaks havoc for visual analysis
We’re at the tip of the iceberg – LITERALLY <PAUSE>
<CLICK>
MongoDB is one of the best solutions for addressing the opportunity beneath the surface of the iceberg
Relational databases CANNOT solve the data challenges that we face today
Someday VERY SOON, we are going to look back at relational databases and think THIS
<CLICK>
Take this in for a second
Relational Databases are great when…
… you know the relationships in advance
… when the schema doesn’t change
… when your data is flat and not nested
… when the data fits on one machine
Today’s use cases are driving the NEED // for a new generation of databases
In the not so distant past, // we used Hierarchical Databases // which contained HW and Application Specific Data.
This meant that data was NOT easily reusable across applications
Then came the era of the Relational Database.
Relational databases have a lot of limitations relative to Hadoop & NoSQL
They have a scale-up architecture. // They can only handle structured data. // They are schema-on-write. // They have limited data processing capabilities. // And they are expensive.
That said, relational databases are still very relevant as a transaction source, // and they will continue to be
It’s just that relational databases are DECLINING as a Big Data DESTINATION. // Companies are RIGHT-SIZING their Relational DBs
In reaction, the Relational Database Vendors are creating reference architectures with Hadoop // to SLOW the bleeding
NoSQL Databases on the other hand // (of which Hadoop is one type) // invite you to store ALL OF YOUR DATA – both structured and unstructured.
They are designed to be distributed with a scale-out architecture, // meaning when you scale // you just add another BOX // instead of getting a BIGGER BOX
They allow you to be more AGILE // They allow for SCHEMA-ON-READ – this means you don’t have to actually DEFINE your schema // UNTIL you are ready to analyze it
They combine STORAGE together with COMPUTE
It’s CHEAPER!!! Think $300/TB for HADOOP, $1500/TB for TERADATA
Tableau plays a FUNDAMENTAL role // in the analysis of Big Data.
If you have ever walked the floors at any of the Big Data conferences, // you’ll see Tableau EVERYWHERE // and it all goes back to our core value proposition for ALL Data
So Why Does Tableau WIN for Big Data?
First, we provide broad access to Big Data platforms // – We have a number of direct connectors in our product for Hadoop, NoSQL and Cloud-based data sources
For Hadoop, our top partners include Cloudera, Hortonworks & MapR
For NoSQL, we have connectors to Datastax Cassandra & MarkLogic & hopefully soon MongoDB
And for Cloud, we have Google BigQuery, Amazon Redshift & Amazon EMR
We help to unlock huge data stores by enabling visual analytics without coding // – Data that is stored in Hadoop is NOT easily accessible, // ESPECIALLY to business users. // With Tableau, you don’t need to write code // – this extends the accessibility and usefulness of Big Data // to ALL users!
We have a hybrid data architecture - // Tableau can connect LIVE to data sources or bring it IN-MEMORY. // LIVE connectivity works great for the data exploration use case // OR when connecting to FAST, INTERACTIVE query engines such as Impala & Spark against large datasets. // IN ADDITION, we can also ACCELERATE slower data sources by using our in-memory Data Engine.
This enables TWO distinct use cases in Big Data: Data Exploration and Data Reporting
In the data exploration use case, // Tableau users connect directly to their data // to understand the shape of their data, // identify initial trends and outliers, // and decide what view of the data they want to expose to their end users
In the data reporting use case, // Tableau users access prepared views of the data // to create purpose-built dashboards and storypoints
We enable mashup with other data via data blending – As a Tableau user, // you are not FORCED to move any of your data, // NOR does it need to be in ONE place. It’s not just about BIG DATA, it’s about DISTRIBUTED DATA
We invest in our platform query performance – // ALL of those GREAT performance enhancements in V9 will really shine here. // Of the improvements, parallel queries are ESPECIALLY relevant on distributed architectures such as Hadoop
We provide a consistent interface to visualizing data - // If a user is accustomed to using Tableau for small data, // it’s the same familiar interface for the analysis of Big Data as well
Analyzing JSON data in a BI or data visualization tool traditionally requires an ETL process.
There are three reasons you would want to use an ETL process:
1. Data standardization or data cleansing – making the data more queryable
2. Representational transformation – changing the source data from nested & complex to flat & relational
3. Data movement – moving the data from the staging area to a target system
Performing traditional ETL requires additional operational overhead as well as longer time-to-insight due to the additional steps.
It also often requires IT involvement to help transform the data due to the limited availability and capability of “business user friendly” ETL tools
Interfaces that enable “schema-on-read,” such as the Simba ODBC driver for MongoDB, can eliminate steps 2 & 3 – representational transformation and data movement.
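For example, with schema-on-read the nested JSON can be queried in place, so no transformation or movement step is needed before analysis. A minimal pymongo sketch, with hypothetical names:

```python
from pymongo import MongoClient

coll = MongoClient()["demo"]["users"]  # hypothetical database/collection

# Dot notation reaches into nested sub-documents at query time,
# so the documents never have to be flattened or relocated first.
kids_in_los_altos = coll.count_documents({"district": "Los Altos"})
first_names = coll.distinct("name.first")
```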
Representing nested data in a relational model is a big challenge.
The easiest way of representing nested data in a relational model is by simple flattening.
The drawback of this approach is that nested elements such as arrays can cause the flattened relational model to become very sparse.
A preferred method of representing nested data in a relational model is to create a separate virtual sub-table for each nested element.
In the main fact table, key-value pairs become column names and values, while embedded sub-documents are flattened into additional columns.
However, a nested array would be represented as a virtual sub-table that is linked together with a foreign key to the main table to maintain the relationship between the records.
The foreign keys are not part of the original dataset, but they are generated during schema inference to establish the relationship.
An example of this is shown with users being the main table and cars being the virtual sub-table from the nested array.
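A toy version of that decomposition, generating the synthetic foreign keys described above (the users/cars names follow the example; this is not the Simba driver's implementation):

```python
def to_relational(docs):
    """Split documents into a main 'users' table and a 'cars' sub-table."""
    users, cars = [], []
    for user_id, doc in enumerate(docs):
        row = {"_user_id": user_id}  # generated key, not in the source data
        for key, value in doc.items():
            if key == "cars":  # nested array -> rows in a virtual sub-table
                for car in value:
                    cars.append({"_user_id": user_id, **car})
            elif isinstance(value, dict):  # sub-document -> flattened columns
                row.update({f"{key}.{k}": v for k, v in value.items()})
            else:
                row[key] = value
        users.append(row)
    return users, cars

users, cars = to_relational([
    {"name": {"first": "Ann"}, "cars": [{"make": "Saab"}, {"make": "Kia"}]},
])
# users -> [{'_user_id': 0, 'name.first': 'Ann'}]
# cars  -> [{'_user_id': 0, 'make': 'Saab'}, {'_user_id': 0, 'make': 'Kia'}]
```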
Now if your customer asks you how we connect to Big Data // it’s primarily through an ODBC interface
The ODBC driver // translates SQL-92 queries into SQL-LIKE languages such as HiveQL
To achieve the BEST performance possible, // we custom tune the SQL we generate.
We also push down aggregations, filters and other SQL operations // TO the big data platforms // to take ADVANTAGE of their capability to handle LARGE amounts of data
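As an illustration of pushdown, a SQL GROUP BY arriving over ODBC can be answered by a single aggregation pipeline executed inside MongoDB, rather than by pulling rows back to the client; the pipeline shape here is illustrative:

```python
from pymongo import MongoClient

coll = MongoClient()["demo"]["users"]  # hypothetical database/collection

# SELECT district, COUNT(*) FROM users GROUP BY district
# pushed down as a $group stage, so MongoDB does the heavy lifting:
pipeline = [
    {"$group": {"_id": "$district", "n": {"$sum": 1}}},
    {"$sort": {"n": -1}},
]
for row in coll.aggregate(pipeline):
    print(row)
```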
Simba MongoDB ODBC Driver
Translates SQL into Native MongoDB API calls
Performs schema inference to capture relational metadata for JSON – this helps map schema-less data to a fixed schema
Converts JSON data to relational data
Based on ODBC 3.80 Standard
Full 64-bit and 32-bit support
Full SQL support