This session focuses on delivering operationally robust deployments of MongoDB via specific design capabilities and varying data feeds. Learn how to use services or driver wrappers to unify design patterns for managing data. This talk will address the following questions:
How do you enforce a schema?
How do you redact or remove sensitive data in queries and feeds?
How do you detect and police "out of profile" queries and make sure they do not threaten your system?
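The "driver wrapper" idea from the abstract can be sketched as a thin layer that redacts sensitive fields before results ever leave the data layer. The field names and wrapper shape below are illustrative, not a MongoDB API:

```javascript
// Sketch of a driver wrapper that strips sensitive fields from every
// document before returning it to the caller. Field names are made up.
const SENSITIVE_FIELDS = ['ssn', 'password', 'creditCard'];

function redact(doc, fields = SENSITIVE_FIELDS) {
  const clean = {};
  for (const [key, value] of Object.entries(doc)) {
    if (fields.includes(key)) continue; // drop sensitive keys
    clean[key] = value && typeof value === 'object' && !Array.isArray(value)
      ? redact(value, fields)           // recurse into subdocuments
      : value;
  }
  return clean;
}

// A query wrapper applies redact() to each matching document.
function findRedacted(collectionLikeArray, predicate) {
  return collectionLikeArray.filter(predicate).map(d => redact(d));
}

const users = [
  { name: 'Ada', ssn: '123-45-6789', profile: { password: 'x', city: 'Leeds' } },
];
console.log(findRedacted(users, u => u.name === 'Ada'));
// -> [ { name: 'Ada', profile: { city: 'Leeds' } } ]
```

In a real deployment the same check would sit in a shared service or driver subclass, so every application path gets identical redaction behavior.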
Data Management 2: Conquering Data Proliferation - MongoDB
Today's customers demand applications which integrate intelligently with data from mobile, social media and cloud sources. A system of engagement meets these expectations by applying data and analytics drawn from an array of master systems. The enormous scale and performance required overwhelm relational approaches, but we can use MongoDB to meet the challenge. We'll learn to capture and transmit data changes among disparate systems, expose batch data as interactive operational queries and build systems with strong division of concerns, agility and flexibility.
OUG Scotland 2014 - NoSQL and MySQL - The best of both worlds - Andrew Morgan
Understand how you can get the benefits you're looking for from NoSQL data stores without sacrificing the power and flexibility of the world's most popular open source database - MySQL.
One of MongoDB’s primary attractions for developers is that it gives them the ability to start application development without needing to define a formal, up-front schema. Operations teams appreciate the fact that they don't need to perform a time-consuming schema upgrade operation every time the developers need to store a different attribute.
Some projects reach a point where it's necessary to define rules on what's being stored in the database. This webinar explains how MongoDB 3.2 allows that document validation work to be performed by the database rather than in the application code.
This webinar focuses on the benefits of using document validation: how to set up the rules using the familiar MongoDB Query Language and how to safely roll it out into an existing, mature production environment.
Database Trends for Modern Applications: Why the Database You Choose Matters - MongoDB
Matt Kalan, Senior Solutions Architect, MongoDB
Matt will explain how modern technology requirements have changed the requirements of the database. In order to handle agile development, big data, cloud, APIs, continuous availability, and unlimited scale while lowering costs, new capabilities are required. Do you need to tolerate the impedance mismatch between an object model and the relational model, or is there another way? We will walk through the application development process, to the code level, to compare using an RDBMS with MongoDB.
One of MongoDB’s primary appeals to developers is that it gives them the ability to start application development without needing to define a formal, up-front schema. Operations teams appreciate the fact that they don't need to perform a time-consuming schema upgrade operation every time the developers need to store a different attribute (as an example, The Weather Channel is now able to launch new features in hours whereas it used to take weeks). For business leaders, the application gets launched much faster, and new features can be rolled out more frequently. MongoDB powers agility.
Some projects reach a point where it's necessary to define rules on what's being stored in the database – for example, that for any document in a particular collection, you can be assured that certain attributes are present.
To address the challenges discussed above, while at the same time maintaining the benefits of a dynamic schema, MongoDB 3.2 introduces document validation.
There is significant flexibility to customize which parts of the documents are **and are not** validated for any collection.
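A MongoDB 3.2 validator is an ordinary query-language document attached to a collection (for example via `db.createCollection(name, { validator })`). The sketch below builds such a rule document and applies a tiny in-memory subset of it (`$exists` and `$type` only, with `$type` mapped to JavaScript's `typeof` as a simplification) so the idea can be demonstrated without a server:

```javascript
// A validator expressed in MongoDB query-language style: every document
// must have a string email and a numeric age. The $type-to-typeof mapping
// below is a simplification for illustration; real $type uses BSON types.
const validator = {
  email: { $exists: true, $type: 'string' },
  age:   { $exists: true, $type: 'number' },
};

// Minimal in-memory evaluator for the $exists / $type subset.
function matchesValidator(doc, rules) {
  return Object.entries(rules).every(([field, conds]) => {
    const present = field in doc;
    if (conds.$exists && !present) return false;
    if (conds.$type && present && typeof doc[field] !== conds.$type) return false;
    return true;
  });
}

console.log(matchesValidator({ email: 'a@b.com', age: 30 }, validator)); // true
console.log(matchesValidator({ email: 42 }, validator));                 // false
```

On the server the same rule document would be enforced on insert and update, which is exactly the "database rather than application code" point made above.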
Webinar: Building Your First App with MongoDB and Java - MongoDB
The document discusses building Java applications that use MongoDB as the database. It covers connecting to MongoDB from Java using the driver, designing schemas for embedded documents and arrays, building Java objects to represent and insert data, and performing basic operations like inserts. The document also mentions using an object-document mapper like Morphia to simplify interactions between Java objects and MongoDB documents.
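What an object-document mapper like Morphia does can be reduced to one idea: translating between typed objects and plain documents. Morphia itself is a Java library; the sketch below illustrates the concept in JavaScript with a made-up `User` class:

```javascript
// Core of the object-document mapping idea: a typed class that knows how
// to serialize itself to a plain document and rebuild itself from one.
// Class shape and the choice of email as _id are illustrative.
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
  toDocument() {
    return { _id: this.email, name: this.name, email: this.email };
  }
  static fromDocument(doc) {
    return new User(doc.name, doc.email);
  }
}

const doc  = new User('Ada', 'ada@example.com').toDocument();
const back = User.fromDocument(doc);
console.log(doc.name);              // 'Ada'
console.log(back instanceof User);  // true
```

An ODM automates exactly this round trip (plus indexes, references, and lifecycle hooks), which is why it simplifies the interactions the summary mentions.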
MongoDB .local Chicago 2019: Practical Data Modeling for MongoDB: Tutorial - MongoDB
For 30 years, developers have been taught that relational data modeling was THE way to model, but as more companies adopt MongoDB as their data platform, the approaches that work well in relational design actually work against you in a document model design. In this talk, we will discuss how to conceptually approach modeling data with MongoDB, focusing on practical foundational techniques, paired with tips and tricks, and wrapping with discussing design patterns to solve common real world problems.
Benefits of Using MongoDB Over RDBMS (At An Evening with MongoDB Minneapolis ... - MongoDB
The document summarizes a presentation on MongoDB given in Minneapolis on March 5, 2015. The agenda included a quick overview of MongoDB, benefits of using MongoDB over relational database management systems (RDBMSs), and updates to MongoDB version 3.0. The presentation compared development using SQL versus MongoDB over several days, showing that adding new fields and data structures like lists was much simpler in MongoDB due to its flexible document-based data model compared to the changes required when using SQL and relational databases.
PistonHead's use of MongoDB for Analytics - Andrew Morgan
Haymarket Media Group is building a reporting and analytics suite called PistonHub to provide dealers and administrators insights into classifieds and stock performance data. PistonHub will aggregate data from various sources like classifieds, calls, emails, and stock information to generate daily statistics for each dealer that can be viewed on a dashboard. This consolidated data will give dealers and sales teams more visibility to help dealers improve performance. The initial feedback on PistonHub has been positive for providing extra insights.
Just a few years ago all software systems were designed to be monoliths running on a single big and powerful machine. But nowadays most companies prefer to scale out instead of scaling up, because it is much easier to buy or rent a large cluster of commodity hardware than to get a single machine that is powerful enough. In the database area, scaling out is realized by a combination of polyglot persistence and sharding of data. On the application level, scaling out is realized by microservices. In this talk I will briefly introduce the concepts and ideas of microservices and discuss their benefits and drawbacks. Afterwards I will focus on the point of intersection of a microservice-based application talking to one or many NoSQL databases. We will try to find answers to these questions: Are there differences from a monolithic application? How do you scale the whole system properly? What about polyglot persistence? Is there a data-centric way to split microservices?
The document discusses building a CouchDB application to store human protein data, describing how each protein document would contain information like name, sequence, and other defining features extracted from public databases. It provides an example protein document to demonstrate the type of data that would be stored.
This document provides an overview of MongoDB, a popular NoSQL database. It discusses key features of MongoDB like its schemaless and document-oriented data model. It also covers how MongoDB supports high availability through replica sets and horizontal scaling through sharding. The document aims to help developers understand how MongoDB works and when it may be suitable for different use cases.
1. The document explains Ajax frameworks and functions from the Ajax Gold library. Ajax frameworks contain JavaScript functions that simplify making Ajax requests, reducing code. The getDataReturnText function uses GET to fetch text from a URL, calling a callback function on completion. getDataReturnXml similarly fetches XML. postDataReturnText uses POST to send data to a URL and receive a text response.
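The `getDataReturnText` pattern described above (GET a URL, hand the response text to a callback) can be sketched as follows. The XHR factory parameter is an addition for illustration so the function can be exercised outside a browser; in a real page the object would simply be `new XMLHttpRequest()`:

```javascript
// Sketch of the getDataReturnText pattern: asynchronous GET, then invoke
// the callback with the response text once the request completes.
function getDataReturnText(url, callback, createXhr) {
  const xhr = createXhr();
  xhr.open('GET', url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 = done; status 200 = OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText);
    }
  };
  xhr.send(null);
}

// A fake XHR standing in for the browser object, so the flow can be shown
// without a server: send() "completes" the request immediately.
function fakeXhrFactory(responseText) {
  return () => ({
    readyState: 0,
    status: 0,
    responseText,
    open() {},
    send() {
      this.readyState = 4;
      this.status = 200;
      this.onreadystatechange();
    },
  });
}

let received;
getDataReturnText('/data.txt', text => { received = text; },
                  fakeXhrFactory('hello from server'));
console.log(received); // 'hello from server'
```

The POST and XML variants mentioned above differ only in the verb passed to `open()` and in reading `responseXML` instead of `responseText`.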
1. The document discusses using Ajax to return JavaScript code and objects from a server. Code examples are provided to return a JavaScript function from a PHP file using XMLHttpRequest, and to convert text into a JavaScript object.
2. Methods for using the XMLHttpRequest HEAD method are demonstrated to retrieve header information from the server, such as the server name, date/time, and file modification date.
3. The code is modified to extract only the last modified date from the header, and then further modified to display individual parts of the date like date, month, year, hours, minutes, and seconds.
4. An example is given to check if a URL exists using HEAD requests and XMLHttpRequest.
This document discusses managing transactions across multiple transactional resources like databases and message queues using Apache Camel. It presents different approaches for handling transactions, including using multiple transaction managers, a single transaction manager with policies, and an XA-capable transaction manager with Atomikos wrappers. An error handler route is also demonstrated to handle exceptions.
After a short introduction to the Java driver for MongoDB, we'll have a look at the more abstract persistence frameworks like Morphia, Spring Data, Jongo and Hibernate OGM.
This document discusses MongoDB performance tuning. It emphasizes that performance tuning is an obsession that requires planning schema design, statement tuning, and instance tuning in that order. It provides examples of using the MongoDB profiler and explain functions to analyze statements and identify tuning opportunities like non-covered indexes, unnecessary document scans, and low data locality. Instance tuning focuses on optimizing writes through fast update operations and secondary index usage, and optimizing reads by ensuring statements are tuned and data is sharded appropriately. Overall performance depends on properly tuning both reads and writes.
Development time is wasted as the bulk of the work shifts from adding business features to struggling with the RDBMS. MongoDB, the leading NoSQL database, offers a flexible and scalable solution.
MongoDB is an open-source database. CRUD in MongoDB is not done with SQL statements as in other databases; it is achieved with JSON-style NoSQL queries, which I have tried to explain here.
The document describes how to use Ajax techniques to fetch data from a text file and display it on a web page without refreshing the page. It includes:
1) An HTML file with a button that, when clicked, calls a JavaScript function to fetch the data.
2) The JavaScript function uses the XMLHttpRequest object to make an asynchronous GET request to the text file and display the response in a <div> element.
3) It analyzes how the XMLHttpRequest object is used to open a connection, handle the response, and display the fetched data on the page without reloading.
The document describes code for an Ajax program that fetches data from a text file without refreshing the page. It includes:
1. HTML with a button to call the getData() function, which makes an AJAX request and displays the response in a <div>.
2. JavaScript code to create an XMLHttpRequest object and define the getData() function. This function opens a GET request, defines an onreadystatechange handler to process the response, and sends the request.
3. An analysis of the code, explaining how it works step-by-step, including creating the XMLHttpRequest object, making the asynchronous request, and updating the HTML with the response text.
The document provides an overview of using Java to interact with MongoDB. It discusses connecting to MongoDB, working with collections, inserting and querying documents, using GridFS to store files, the object mapping library Morphia, and how Groovy and the Grails framework can simplify MongoDB development. The key topics covered include making connections, inserting and querying documents, GridFS for file storage, mapping objects with Morphia, dynamic queries in Groovy, and the MongoDB Grails plugin.
This document provides an overview of MongoDB, Java, and Spring Data. It discusses how MongoDB is a document-oriented NoSQL database that uses JSON-like documents with dynamic schemas. It describes how the Java driver can be used to interact with MongoDB to perform CRUD operations. It also explains how Spring Data provides an abstraction layer over the Java driver and allows for object mapping and repository-based queries to MongoDB.
MongoDB + Java - Everything you need to know - Norberto Leite
Learn everything you need to know to get started building a MongoDB-based app in Java. We'll explore the relationship between MongoDB and various languages on the Java Virtual Machine such as Java, Scala, and Clojure. From there, we'll examine the popular frameworks and integration points between MongoDB and the JVM including Spring Data and object-document mappers like Morphia.
Speaker: Charlie Swanson, Software Engineer, MongoDB
Level: 200 (Intermediate)
Track: How We Build MongoDB
Learn how MongoDB answers your queries from a query system engineer. If you've ever had a performance problem with a query but didn't know how to find the cause, or if you've ever needed to confirm that your shiny new index is being put to work, the explain command is an excellent place to start. MongoDB's explain system is a powerful tool for solving this type of problem, but can be intimidating and unwieldy to use. In this talk, we will discuss how the explain command works and break down its output into digestible pieces.
What You Will Learn:
- Exactly how indexes are used during your queries and aggregations
- How to diagnose your poorly performing operations
- How to tune your most important operations to ensure that they scale seamlessly
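Explain output is a nested tree of stages, which is much of what makes it unwieldy. A small walker that collects stage names makes it easy to spot a `COLLSCAN` (full collection scan) where you expected an `IXSCAN`. The sample plan below mimics the shape of a `winningPlan`, simplified for illustration:

```javascript
// Walk an explain plan tree and collect the stage names in order.
// Real explain output nests stages via inputStage (or inputStages for
// stages with several children, e.g. merges).
function collectStages(stage, out = []) {
  if (!stage) return out;
  out.push(stage.stage);
  if (stage.inputStage) collectStages(stage.inputStage, out);
  (stage.inputStages || []).forEach(s => collectStages(s, out));
  return out;
}

// Simplified mock of a winningPlan: an index scan feeding a fetch.
const winningPlan = {
  stage: 'FETCH',
  inputStage: { stage: 'IXSCAN', indexName: 'email_1' },
};

const stages = collectStages(winningPlan);
console.log(stages);                      // [ 'FETCH', 'IXSCAN' ]
console.log(stages.includes('COLLSCAN')); // false: the index is being used
```

The same traversal idea applies to the fuller `executionStats` output, where per-stage counters (documents examined vs. returned) point at the tuning opportunities the talk covers.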
The integration between the Spring Framework and MongoDB tends to be somewhat unknown. This presentation shows the different projects that compose the Spring ecosystem (Spring Data, Spring Boot, Spring IO, etc.) and how to bridge from pure Java projects to massive enterprise systems that require these pieces to interact.
This document discusses using a multi-model database like ArangoDB for microservices. It explains how ArangoDB can store different data models like key-value, documents, and graphs to support microservices that use different data structures. It provides examples of breaking up a monolithic application into microservices that use different parts of the database, and using Foxx to build REST APIs on top of ArangoDB to integrate microservices.
Webinar: Applikationsentwicklung mit MongoDB: Teil 5: Reporting & AggregationMongoDB
This document provides an agenda for a MongoDB basics session. It will cover reporting and aggregation options in MongoDB like MapReduce, the Aggregation Framework, and examples of using aggregation for common reports like popular tags, popular articles, and aggregating geospatial data. It also discusses using the aggregation framework pipeline and operators to build these reports and tuning performance with the explain plan.
The document discusses using MongoDB as a log collector. It provides an agenda that includes who the presenter is, how logging is currently done, and ideas for using MongoDB for logging in the future. Specific topics covered include using syslog-ng to send logs to MongoDB, examples of logging Apache traffic, and map-reduce examples for analyzing logs like finding the top 10 IP addresses.
1403 app dev series - session 5 - analyticsMongoDB
This document provides an agenda for a session on reporting and analytics options in MongoDB, including Map Reduce, the Aggregation Framework, and examples using geospatial and text search features. It discusses building reports in an application, tuning aggregation pipelines with explain plans, and computing aggregations on the fly or pre-computing and storing them. The next session will cover operational topics like scaling out, high availability, production preparation, and sizing.
This document discusses using MongoDB as a log collector. It provides examples of storing log data from syslog-ng in MongoDB collections, including filtering and parsing logs. It also gives examples of analyzing the log data through map-reduce to find top IP addresses and provides ideas for other uses like CAPTCHAs, error localization, and analytics.
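The "top 10 IP addresses" report mentioned above can be sketched two ways: as the aggregation pipeline MongoDB would run server-side, and as equivalent plain JavaScript that is runnable here. The sample records and the `ip` field name are assumptions.

```javascript
// 1) As a MongoDB aggregation pipeline (would run server-side on a "logs" collection):
const topIpPipeline = [
  { $group: { _id: "$ip", hits: { $sum: 1 } } }, // count hits per IP
  { $sort: { hits: -1 } },                       // most active first
  { $limit: 10 },                                // keep the top 10
];

// 2) The same logic in plain JavaScript, runnable without a server:
function topIps(logs, n) {
  const counts = new Map();
  for (const entry of logs) {
    counts.set(entry.ip, (counts.get(entry.ip) || 0) + 1);
  }
  return [...counts.entries()]
    .map(([ip, hits]) => ({ ip, hits }))
    .sort((a, b) => b.hits - a.hits)
    .slice(0, n);
}

const sampleLogs = [
  { ip: "10.0.0.1" }, { ip: "10.0.0.2" },
  { ip: "10.0.0.1" }, { ip: "10.0.0.1" }, { ip: "10.0.0.2" },
];
console.log(topIps(sampleLogs, 10));
// [ { ip: '10.0.0.1', hits: 3 }, { ip: '10.0.0.2', hits: 2 } ]
```

The map-reduce version in the talk computes the same grouping; the aggregation pipeline is usually the faster option on modern MongoDB.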
The document provides an overview of Couchbase, a NoSQL document-oriented database. It discusses key concepts such as Couchbase being non-SQL and schema-less with flexible data models. It also covers Couchbase architecture with peer-to-peer nodes, installation, basic usage through SDKs and web console, data modeling and querying documents through views and N1QL.
This is a presentation given on October 24 by Michael Uzquiano of Cloud CMS (http://www.cloudcms.com) at the MongoDB Boston conference.
In this presentation, we cover Hazelcast - an in-memory data grid that provides distributed object persistence across multiple nodes in a cluster. When backed by MongoDB, objects are naturally written to Mongo by Hazelcast. The integration points are clean and easy to implement.
We cover a few simple cases along with code samples to provide the MongoDB community with some ideas of how to integrate Hazelcast into their own MongoDB Java applications.
For developers new to MongoDB and Node.js, however, some of the common design patterns are very different from those of an RDBMS and traditional synchronous languages. Developers learning these technologies together may find it a bit bewildering. In reality, however, these tools fit perfectly together and enable a high degree of developer productivity and application performance.
This webinar will walk developers through common MongoDB development patterns in Node.js, such as efficiently loading data into MongoDB using MongoDB's bulk API, iterating through query results, and managing simultaneous asynchronous MongoDB queries to provide the best possible application performance. Working Node.js and MongoDB examples will be used throughout the presentation.
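Two of the patterns named above can be sketched with stub functions standing in for real driver calls: batching documents for a bulk insert, and running several asynchronous queries concurrently. The `fakeQuery` helper and batch size are illustrative assumptions, not MongoDB APIs.

```javascript
// Batching: the bulk API performs best when documents are written in
// reasonably sized groups rather than one at a time.
function chunk(docs, size) {
  const batches = [];
  for (let i = 0; i < docs.length; i += size) {
    batches.push(docs.slice(i, i + size));
  }
  return batches;
}

// Concurrency: stub "queries" standing in for collection.find().toArray().
function fakeQuery(result, delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve(result), delayMs));
}

async function loadDashboard() {
  // Issue all three queries at once instead of awaiting them one by one.
  const [users, orders, stats] = await Promise.all([
    fakeQuery([{ name: "ada" }], 30),
    fakeQuery([{ id: 1 }, { id: 2 }], 20),
    fakeQuery({ total: 2 }, 10),
  ]);
  return { users, orders, stats };
}
```

With a real driver, each `fakeQuery` would be a `find` or `aggregate` call, and each batch from `chunk` would go to a single `bulkWrite`.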
Codepot - Pig i Hive: szybkie wprowadzenie / Pig and Hive crash courseSages
A quick introduction to the Pig and Hive technologies from the Hadoop ecosystem. Presentation given during the Codepot workshop on 29.08.2015 by Radosław Stankiewicz and Bartłomiej Tartanus.
Learn everything you need to know to get started building a MongoDB-based app in Java. We'll explore the relationship between MongoDB and various languages on the Java Virtual Machine such as Java, Scala, and Clojure. From there, we'll examine the popular frameworks and integration points between MongoDB and the JVM including Spring Data and object-document mappers like Morphia.
MongoDB is the trusted document store we turn to when we have tough data store problems to solve. For this talk we are going to go a little off the beaten path and explore what other roles we can fit MongoDB into. Others have discussed how to turn MongoDB's capped collections into a publish/subscribe server. We stretch that a little further and turn MongoDB into a full-fledged broker with both publish/subscribe and queue semantics, and the ability to mix them. We will provide code and a running demo of the queue producers and consumers. Next we will turn to coordination services: we will explore the fundamental features and show how to implement them using MongoDB as the storage engine. Again we will show the code and demo the coordination of multiple applications.
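The queue semantics described here can be sketched as follows. In MongoDB the claim step would be an atomic `findOneAndUpdate` on a collection; in this runnable sketch an in-memory array stands in for the collection, and all names are illustrative.

```javascript
// An in-memory stand-in for a "messages" collection.
const queue = [
  { _id: 1, payload: "job-a", claimedBy: null },
  { _id: 2, payload: "job-b", claimedBy: null },
];

// Claim the oldest unclaimed message. Single-threaded here; against
// MongoDB, findOneAndUpdate makes the find-and-set step atomic.
function claimNext(consumerId) {
  const msg = queue.find((m) => m.claimedBy === null);
  if (!msg) return null;
  msg.claimedBy = consumerId; // in MongoDB: the $set in findOneAndUpdate
  return msg;
}

const a = claimNext("worker-1"); // claims job-a
const b = claimNext("worker-2"); // claims job-b
const c = claimNext("worker-3"); // null: queue drained
```

Publish/subscribe, by contrast, would leave messages unclaimed and have each consumer tail the capped collection with its own cursor.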
Solutions for bi-directional Integration between Oracle RDMBS & Apache KafkaGuido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today’s enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data held in the RDBMS of a legacy application. It’s important to cache this data in the stream processing solution, so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up to date if the source data changes? We can either poll the database for changes using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, e.g. when an anomaly detected inside the stream processing solution should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action, but this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queuing (a message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
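The "poll the database for changes" blueprint can be illustrated with a Kafka Connect JDBC source connector configuration, shown here as a JavaScript object. The connection URL, table and column names are assumptions; the property keys are those of the Confluent JDBC source connector.

```javascript
// Sketch: Kafka Connect polls the ORDERS table and publishes changed rows
// to the "oracle-ORDERS" topic.
const jdbcSourceConfig = {
  "name": "oracle-orders-source",
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1",
  "table.whitelist": "ORDERS",
  // Detect new and updated rows via a timestamp column plus an
  // incrementing id column (catches updates as well as inserts).
  "mode": "timestamp+incrementing",
  "timestamp.column.name": "LAST_MODIFIED",
  "incrementing.column.name": "ORDER_ID",
  "topic.prefix": "oracle-",
  "poll.interval.ms": "5000",
};
```

Polling adds latency bounded by `poll.interval.ms`; the push-based blueprints (GoldenGate, Debezium) trade that latency for more moving parts.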
Solutions for bi-directional integration between Oracle RDBMS and Apache Kafk...confluent
The document discusses LINQ (Language Integrated Query), which allows querying of data from various sources in .NET using a common language integrated into C# and VB.NET. It covers the context and motivation for LINQ, its architecture and usage with different data sources like XML, relational databases, and web services. It also discusses LINQ query operations, performance considerations, customizations, alternatives to LINQ, and new features in LINQ for .NET Framework 4.0.
Solutions for bi-directional integration between Oracle RDBMS & Apache KafkaGuido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Data sources flowing into Kafka are often native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes are available in Kafka in near real time? This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
The Qt Mobility project is developing new Qt APIs that will benefit all Qt developers. This presentation provides an overview of the APIs and demonstrates the use of some of them through an example application, and should fuel ideas for using the new APIs in your own projects.
Presentation by Alex Luddy held during Qt Developer Days 2009.
http://qt.nokia.com/developer/learning/elearning
Introduction to Big Data Technologies and Apache HadoopSages
The document introduces concepts related to Big Data technology including volume, variety, and velocity of data. It discusses Hadoop architecture including HDFS, MapReduce, YARN, and the Hadoop ecosystem. Examples are provided of common Big Data problems and how they can be solved using Hadoop frameworks like Pig, Hive, and Ambari.
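The MapReduce model that Hadoop (and the Pig and Hive examples above) builds on can be sketched in plain JavaScript with the classic word-count problem; the three functions mirror the map, shuffle and reduce phases. The sample lines are assumptions.

```javascript
function map(line) {              // map phase: emit (word, 1) pairs
  return line.toLowerCase().split(/\s+/).filter(Boolean).map((w) => [w, 1]);
}

function shuffle(pairs) {         // shuffle phase: group values by key
  const groups = new Map();
  for (const [k, v] of pairs) {
    if (!groups.has(k)) groups.set(k, []);
    groups.get(k).push(v);
  }
  return groups;
}

function reduce(word, counts) {   // reduce phase: sum the counts per word
  return [word, counts.reduce((a, b) => a + b, 0)];
}

const lines = ["big data big plans", "big results"];
const pairs = lines.flatMap(map);
const result = new Map([...shuffle(pairs)].map(([w, c]) => reduce(w, c)));
// result.get("big") === 3
```

In Hadoop the same three phases run distributed across the cluster, with HDFS supplying the input splits and the shuffle happening over the network.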
Talk about adding a proxy user at Spark task execution time, given at Spark Summit East 2017 by Jorge López-Malla and Abel Ricon.
full video:
https://www.youtube.com/watch?v=VaU1xC0Rixo&feature=youtu.be
Similar to Data Management 3: Bulletproof Data Management (20)
MongoDB SoCal 2020: Migrate Anything* to MongoDB AtlasMongoDB
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating between replica sets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migrating from other data stores like Amazon DocumentDB, Azure Cosmos DB, DynamoDB, and relational databases is briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...MongoDB
The MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDBMongoDB
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...MongoDB
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real-time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors, or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
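One of the schema designs such sessions typically compare is the bucketing pattern: instead of one document per sensor reading, store one document per sensor per hour, which cuts the document count and index size. A runnable sketch, with hypothetical field names:

```javascript
// Compute the bucket identity for a reading: sensor id plus the UTC hour.
function bucketKey(reading) {
  const hour = new Date(reading.ts);
  hour.setUTCMinutes(0, 0, 0); // truncate to the start of the hour
  return `${reading.sensorId}:${hour.toISOString()}`;
}

// Fold individual readings into hourly bucket documents.
function toBuckets(readings) {
  const buckets = new Map();
  for (const r of readings) {
    const key = bucketKey(r);
    if (!buckets.has(key)) buckets.set(key, { _id: key, measurements: [] });
    buckets.get(key).measurements.push({ ts: r.ts, value: r.value });
  }
  return [...buckets.values()];
}

const readings = [
  { sensorId: "s1", ts: "2020-06-01T10:05:00Z", value: 21.0 },
  { sensorId: "s1", ts: "2020-06-01T10:20:00Z", value: 21.4 },
  { sensorId: "s1", ts: "2020-06-01T11:01:00Z", value: 22.1 },
];
// Three raw readings collapse into two hourly bucket documents.
const buckets = toBuckets(readings);
```

Against MongoDB, the fold would typically be a single upsert per reading (`$push` into the bucket matching `_id`), so buckets grow in place.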
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
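A minimal sketch of the setup step described above: a schema map telling a Client Side Encryption-aware driver to encrypt one field before it leaves the application. The namespace, field name and key reference are assumptions; the algorithm string is the deterministic option, which keeps equality queries possible.

```javascript
// Automatic client-side encryption rules, keyed by "database.collection".
const schemaMap = {
  "hr.employees": {
    bsonType: "object",
    properties: {
      ssn: {
        encrypt: {
          bsonType: "string",
          // Deterministic encryption: the same plaintext always yields the
          // same ciphertext, so db.employees.find({ ssn: ... }) still works.
          algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
          keyId: "/keyAltName", // hypothetical key reference
        },
      },
    },
  },
};
```

In practice this object is passed to the driver's `autoEncryption` options along with the key vault namespace and KMS credentials; randomized encryption is the stronger choice for fields that never need to be queried by value.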
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB
The MongoDB Kubernetes operator is ready for prime time. Learn how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
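The E-S-R guideline can be made concrete with a hypothetical orders query: equality on `status`, sort on `orderDate`, range on `total`. All collection and field names here are assumptions.

```javascript
// Query: equality match on status, range filter on total...
const query = { status: "shipped", total: { $gt: 100 } };
// ...sorted by order date.
const sort = { orderDate: 1 };

// Following E-S-R, the compound index lists the equality field first,
// then the sort field, then the range field:
const index = { status: 1, orderDate: 1, total: 1 };

// A tiny helper that orders field names per the guideline.
function esrOrder(equality, sortFields, range) {
  return [...equality, ...sortFields, ...range];
}
```

Putting `total` before `orderDate` instead would force an in-memory (blocking) sort, because the range scan breaks the index's sort order.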
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
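A sketch of the kind of materialized view this enables: a pipeline that rolls order documents up into a daily summary and writes it into an existing collection with the 4.2 `$merge` stage. Collection and field names are assumptions.

```javascript
// Roll per-order documents up into one summary document per day.
const dailySalesRollup = [
  { $group: {
      _id: { $dateToString: { format: "%Y-%m-%d", date: "$ts" } },
      revenue: { $sum: "$amount" }, // total revenue for the day
      orders: { $sum: 1 },          // number of orders
  } },
  // New in 4.2: write results into an existing collection, replacing
  // summary documents that are already there and inserting new ones.
  { $merge: { into: "daily_sales", whenMatched: "replace", whenNotMatched: "insert" } },
];
```

Re-running the pipeline on a schedule keeps `daily_sales` up to date, which is what turns it into a materialized view rather than a one-off report.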
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB
…to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better apps, faster.
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB
It has never been easier to order online and get delivery in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion.
Data is well known in the Supply Chain world (routes, information about goods, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise with Data Science, Upply is redefining the fundamentals of the Supply Chain, enabling every player in the market to overcome its volatility and inefficiency.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
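The kind of query at the heart of such a session can be sketched as an aggregation stage; the index name, path and vector values here are assumptions.

```javascript
// A semantic search: find the 5 documents whose stored embedding is
// closest to the embedding of the user's query.
const vectorSearchStage = {
  $vectorSearch: {
    index: "plot_embedding_index",     // Atlas Search index on the field
    path: "plot_embedding",            // field holding the stored vectors
    queryVector: [0.01, -0.42, 0.13],  // embedding of the user's query
    numCandidates: 100,                // approximate-search candidate pool
    limit: 5,                          // top results returned
  },
};

const pipeline = [
  vectorSearchStage,
  // Surface the similarity score alongside each result.
  { $project: { title: 1, score: { $meta: "vectorSearchScore" } } },
];
```

In an LLM application, the matched documents would then be fed into the prompt as context, which is the retrieval-augmented pattern the hashtags above allude to.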
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
This UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
2. 2
Part 3 In The Data Management Series
Validating Data
Software Best Practices
Part 1: From Relational To MongoDB (Safe Leverage)
Part 2: Conquering Data Proliferation
Part 3: Bulletproof Data Management
3. 3
Congratulations! At this Point You’ve:
• Created a Data Design
• Migrated Data
• Built a PoC or maybe an App
• Explored Operations
4. 4
The Next Stage: Defend & Leverage!
• Document Validation
• Redaction
• Quality Of Service
5. 5
MongoDB Doesn’t Have These Things
• Document Validation
• Redaction
• Quality Of Service
7. Write Some Code!
1. Focus on interfaces
2. Design for change
3. Keep application, data access layer, data management logic, and database I/O well-factored
4. Minimize compile-time binding
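The principles above can be sketched as an interface-first design. This is an illustrative sketch, not code from the talk: the `DataProvider` name reappears later in the deck, but the method shape and the in-memory stand-in are assumptions.

```java
import java.util.List;
import java.util.Map;

// Principles 1 and 4 in miniature: the application compiles against a
// small interface, never against a concrete driver, so the backing
// store can change without recompiling callers.
interface DataProvider {
    List<Map<String, Object>> fetch(String collection, Map<String, Object> mql);
}

// Any concrete provider (MongoDB driver, RESTful call, in-memory fake
// for tests) can be bound at runtime behind the same contract.
class InMemoryProvider implements DataProvider {
    private final Map<String, List<Map<String, Object>>> store;

    InMemoryProvider(Map<String, List<Map<String, Object>>> store) {
        this.store = store;
    }

    public List<Map<String, Object>> fetch(String collection, Map<String, Object> mql) {
        // A real provider would apply the MQL predicate; the fake ignores it.
        return store.getOrDefault(collection, List.of());
    }
}
```

A test harness can hand the application an `InMemoryProvider` while production wires in a driver-backed one; the callers never know the difference.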
8. 8
Starting Point: The Data Access Layer
MongoDB
Java Driver
Data Access
Layer
Application
class DataAccessLayer {
    private String authenticatedID;
    private String effectiveID;
    private Role role;
    private DB db;
    init() {
        MongoClient mc = new MongoClient(args);
        db = mc.getDB(args);
    }
    List getTransactions(Map predicate) {
        Map mql = doWhateverYouNeed(predicate);
        DBCollection coll = db.getCollection("TX");
        DBCursor c = coll.find(mql);
        List list = new ArrayList();
        while(c.hasNext()) {
            Map raw = (Map) c.next();
            Map morphed = myMorphingLogic(raw);
            list.add(morphed);
        }
        return list;
    }
}
10. 10
A Query Filters Outbound Data
{$and:[{"name":"buzz"},{"prefs":{$exists:true}}]}
11. 11
How About Using It To Filter Inbound Data?
{$and:[{"name":"buzz"},{"prefs":{$exists:true}}]}
12. 12
$exists And $type Already in MQL
Ensure "name" exists (because not null) and is a string:
{"name":{$type:2}}
"age" optional, but if present must be a 32-bit integer:
{$or:[{"age":{$exists:false}}, {"age":{$type:16}}]}
"name" required as a string; "weight" and "height" both required integers or both absent:
{$and: [
  {"name":{$type:2}},
  {$or:[
    {$and:[{"weight":{$type:16}}, {"height":{$type:16}}]},
    {$and:[{"weight":{$exists:false}}, {"height":{$exists:false}}]}
  ]}
]}
14. 14
A New MQL Validator Module Emerges
class MQLValidator {
ValidationResult validate(Map MQL, Map data)
}
MongoDB
Java Driver
Data Access
Layer
Application
Validator NOT inline to MongoDB driver
• Interface too big to create a façade
• Beware of “tall stacks”
MQLValidator
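A minimal sketch of what such a validator might look like, supporting only the $exists and $type operators from the earlier slide (BSON type 2 = string, 16 = 32-bit integer). The real module would cover far more of MQL; this is illustrative only.

```java
import java.util.List;
import java.util.Map;

// Application-side validator that reuses MQL's $exists and $type
// operators as schema rules. $and and $or combine sub-rules.
class MQLValidator {
    @SuppressWarnings("unchecked")
    static boolean validate(Map<String, Object> rule, Map<String, Object> data) {
        for (Map.Entry<String, Object> e : rule.entrySet()) {
            String key = e.getKey();
            if (key.equals("$and")) {
                for (Object sub : (List<?>) e.getValue())
                    if (!validate((Map<String, Object>) sub, data)) return false;
            } else if (key.equals("$or")) {
                boolean any = false;
                for (Object sub : (List<?>) e.getValue())
                    any = any || validate((Map<String, Object>) sub, data);
                if (!any) return false;
            } else if (!checkField(data, key, (Map<String, Object>) e.getValue())) {
                return false;
            }
        }
        return true;
    }

    private static boolean checkField(Map<String, Object> data, String field,
                                      Map<String, Object> cond) {
        Object v = data.get(field);
        for (Map.Entry<String, Object> c : cond.entrySet()) {
            switch (c.getKey()) {
                case "$exists":
                    if (data.containsKey(field) != (Boolean) c.getValue()) return false;
                    break;
                case "$type":
                    int t = (Integer) c.getValue();
                    if (t == 2 && !(v instanceof String)) return false;
                    if (t == 16 && !(v instanceof Integer)) return false;
                    break;
            }
        }
        return true;
    }
}
```

With this, the rule {"name":{$type:2}} passes a document where "name" is a string and rejects one where it is a number, exactly as the query-language form suggests.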
15. 15
MongoDB
DB Engine
Migrating Capability into MongoDB
MongoDB
Java Driver
MQLValidator
Java
Data Access
Layer
MongoDB
DB Engine
MongoDB
Java Driver
MQLValidator
Java
Data Access
Layer
• Coming in v3.2!
• Investment in validation design preserved
• Validation enforceable through ALL drivers
and languages
MongoDB
Python Driver
Application Application
16. 16
Code For The Future…Today
class DataAccessLayer {
someWriteOperation(Map data) {
if(ValidationEnabledInMongoDBengine) {
collection.insert(data); // Not yet
} else {
Map mql = getMQL(); // we'll see this shortly!
// {$or:[{"age":{$exists:false}},
//       {"age":{$type:16}}]}
ValidationResult vr = MQLValidator.validate(mql,data);
if(vr.ok()) {
collection.insert(data);
}
}
}
}
26. 26
The Stack So Far
MongoDB
Java Driver
MQLValidator
Data Access
Layer
Application
ValidatorDBUtils
ValidatorDBUtils populates an MQLValidator object from MongoDB
PQLFilter
27. 27
Representative Example
class DataAccessLayer {
MQLValidator vv = new MQLValidator(); // NOT DB dependent!
init() {
DB db = mongoClient.getDB("mydb");
ValidatorDBUtils.populate(vv, db); // db.validations
}
someWriteOperation(Map data) {
if(ValidationEnabledInMongoDBengine) {
collection.insert(data); // Not yet
} else {
String vn = "appropriateValidationRulesName";
ValidationResult vr = vv.validate(collname, vn, data);
if(vr.ok()) {
collection.insert(data);
}
}
}
}
29. 29
Concept: Post Query Operations (PQO)
{ ssn: { $hash: model }, birthdate: null }
{$and:[{"name":"buzz"},{"prefs":{$exists:true}}]}
30. 30
Adopt MQL-like behavior
Remove field by setting to null:
{"ssn": null}
Redact address with fixed value:
{"address": "XXXX"}
Substitute SSN with a different, correct, consistent value:
{"ssn": {$substitute: "ssnmodel"}}
Hash counterparty name to a consistent value:
{"counterparty": {$hash: "MD5"}}
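These post-query operations might be sketched as follows. This is illustrative only: the $substitute model lookup is omitted, and the in-place-update contract mirrors the PostQuery module described on the next slide rather than reproducing it.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

// Post-query redaction with an MQL-like rule syntax: a null op removes
// the field, a {$hash: alg} op replaces it with a consistent digest,
// and any other value substitutes a fixed literal.
class PostQuery {
    static void process(Map<String, Object> ops, Map<String, Object> doc) {
        for (Map.Entry<String, Object> e : ops.entrySet()) {
            String field = e.getKey();
            if (!doc.containsKey(field)) continue;
            Object op = e.getValue();
            if (op == null) {
                doc.remove(field);                         // {"ssn": null}
            } else if (op instanceof Map && ((Map<?, ?>) op).containsKey("$hash")) {
                String alg = (String) ((Map<?, ?>) op).get("$hash");
                doc.put(field, hexDigest(alg, String.valueOf(doc.get(field))));
            } else {
                doc.put(field, op);                        // {"address": "XXXX"}
            }
        }
    }

    private static String hexDigest(String alg, String value) {
        try {
            byte[] d = MessageDigest.getInstance(alg)
                                    .digest(value.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (java.security.NoSuchAlgorithmException ex) {
            throw new IllegalArgumentException("unknown hash algorithm: " + alg, ex);
        }
    }
}
```

Because the hash is deterministic, the same counterparty always redacts to the same value, so joins and group-bys on the redacted field still work.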
31. 31
A New PostQuery Module Emerges
class PostQuery {
process(Map data, Map operations)
}
PostQuery
MongoDB
Java Driver
MQLValidator
Data Access
Layer
Application
ValidatorDBUtils
PQLFilter
41. 41
Representative Example
class DataAccessLayer {
MQLValidator vv = new MQLValidator(); // NOT DB dependent!
PostQuery pp = new PostQuery();
QOS qs = new QOS();
init() {
DB db = mongoClient.getDB("mydb");
ValidatorDBUtils.populate(vv, db);
PQODBUtils.populate(pp, db);
QOSDBUtils.populate(qs, db);
}
someReadOperation(Map pred) {
Map mql = convertToMQL(pred);
String role = getRole(); // somehow
int maxms = qs.getMaxTime("someReadOperation", role);
Map data = collection.find(mql).maxTime(maxms, TimeUnit.MILLISECONDS);
String pqon = "appropriatePQORulesName";
pp.process(collname, pqon, data); // in place update
return data;
}
}
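The QOS lookup used above might be a simple policy table keyed by (operation, role), returning the server-side maxTime budget in milliseconds so that heavy "out of profile" queries are cut off by the engine instead of degrading it. The names and default here are assumptions, not from the talk.

```java
import java.util.HashMap;
import java.util.Map;

// Quality-of-service policy table: (operation, role) -> maxTime in ms.
// Unlisted combinations fall back to a conservative default.
class QOS {
    private final Map<String, Integer> maxTimeMS = new HashMap<>();
    private final int defaultMS;

    QOS(int defaultMS) { this.defaultMS = defaultMS; }

    void setMaxTime(String operation, String role, int ms) {
        maxTimeMS.put(operation + "|" + role, ms);
    }

    int getMaxTime(String operation, String role) {
        return maxTimeMS.getOrDefault(operation + "|" + role, defaultMS);
    }
}
```

The returned budget feeds directly into the driver's maxTime(...) cursor option, as in the slide above.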
42. 42
QOSDBUtils
A Highly Leverageable Investment
PostQuery
MQLValidator
Data Access
Layer 1
Application1
ValidatorDBUtils
PQLFilter
PQODBUtils
QOS
Application2
Data Access
Layer 2
Application3
Application4
Data Access
Layer 3
Application5
Application6
Reusable For ALL Data Access Layer Logic
43. 43
Not Just Java? Not A Problem
DAL operations have little or no state…
Data and MQL and diagnostics easily
and losslessly converted to and from
JSON…
Can you say … Web Service!
55. 55
The RESTful Provider
class RESTfulProvider implements DataProvider {
    init() { /* set up HTTP machine:port endpoint */ }
    fetch(String collection, Map mql) {
        String jsonstr = JSONUtils.toJSON(mql);
        String url = construct(collection, jsonstr);
        // url is:
        // http://machine:port/collectionName?op=find&mql='{"product":"cleanser","expires":{$gt:{$date:"20200101"}}}'
        HTTPResponse res = call(url);
        Map data = JSONUtils.fromJSON(res.getContent());
        return data;
    }
}
Editor's Notes
HELLO!
This is Buzz Moschetti at MongoDB
Buy Subs, goddamit…! :-D
Some quick logistics.
In the last 5 to 10 mins today, we will answer the most common questions that have been submitted.
Important things to consider: a, b, c
Not appearing in this film today:
Exception/errors and edge condition handling
Options in design WRT class inheritance, per-thread (or more) DAL models vs. static methods, cartridge models, etc.
In particular, we will see
In addition to being well factored, this permits the DAL to contain both DB-persisted validation and dynamic, business data driven validation managed by the SAME code set with the SAME expression language.
NOTE: NO mention of user and role here!
We only define a set of ops by rule name.
Something else has to associate these with users and roles.
ALSO: Nuance between entitlements set up on the DB vs. entitlements at the "user level." Consider Heathrow airport: you are entitled to see things, but if you are not on your home network, you cannot see the SSN.
As mentioned before: something else has to associate rules with roles.
Blackout is simple: permit or deny based on any number of factors.
maxTime: Engine time, not wall clock time; a good proxy for actual load on the engine.
On behalf of all of us at MongoDB , thank you for attending this webinar!
I hope what you saw and heard today gave you some insight and clues into what you might face in your own schema design efforts.
Remember you can always reach out to us at MongoDB for guidance.
With that, code well and be well.