MongoDB is a NoSQL database that uses a document-oriented data model. It stores data in JSON-like documents within collections, rather than in tables as in relational databases. The document structure can vary from document to document, which makes MongoDB very flexible and useful for rapid application development. MapReduce is a programming paradigm that allows users to distribute computation across large datasets by mapping values to keys, and then reducing the values for each key. MongoDB supports MapReduce to perform distributed computations and aggregations on large datasets efficiently.
This document discusses NoSQL databases and provides an example of using MongoDB to calculate a total sum from documents. Key points:
- MongoDB is a document-oriented NoSQL database where data is stored in JSON-like documents within collections. It uses map-reduce functions to perform aggregations.
- The example shows saving ticket documents with an ID and checkout amount to the tickets collection.
- A map-reduce operation is run to emit the checkout amount from each document. These are summed by the reduce function to calculate a total of 430 across all documents.
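The map/emit/reduce flow described above can be sketched in plain Python (a standalone illustration, not a live MongoDB call; the ticket amounts are invented so that they sum to the example's total of 430):

```python
from collections import defaultdict

# Hypothetical ticket documents, like those saved to the "tickets" collection.
tickets = [
    {"_id": 1, "checkout": 100},
    {"_id": 2, "checkout": 230},
    {"_id": 3, "checkout": 100},
]

def map_ticket(doc):
    # The map step: emit(key, value) for each document.
    yield ("total", doc["checkout"])

def reduce_checkout(key, values):
    # The reduce step: combine all values emitted under one key.
    return sum(values)

def map_reduce(docs, mapper, reducer):
    emitted = defaultdict(list)
    for doc in docs:
        for key, value in mapper(doc):
            emitted[key].append(value)
    return {key: reducer(key, values) for key, values in emitted.items()}

print(map_reduce(tickets, map_ticket, reduce_checkout))  # {'total': 430}
```

In MongoDB the same shape of computation runs server-side, with the map and reduce functions written in JavaScript and the values distributed across the cluster.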
Talk I gave at NoSQL Day with Giordano Scalzo on March 25th, 2011.
It's about how CouchDB can replace a full server-side MVC stack, making development of simple web apps a piece of cake.
See also:
http://federico.galassi.net/
http://www.nosqlday.it/
http://couchdb.apache.org/
Follow me on Twitter!
https://twitter.com/federicogalassi
This document discusses cross-platform support and push notifications in Windows Azure Mobile Services. It explains how to send push notifications to different device platforms including Windows Store, Windows Phone, iOS, and Android. It also discusses using service filters and delegating handlers to intercept requests and responses for custom processing like adding versioning information.
The document is a report on a Visual Programming Assignment (UAS) submitted by Andriyan Dwi P. It details two programming problems solved using C# and Windows Forms. Problem 1 involves creating patterned stars in the console or windows form. Problem 2 creates a database login system with forms for login, data entry, editing, deleting and viewing records from a MySQL database. Screenshots are included showing the outputs and database forms created.
Run MongoDB with Confidence Using MongoDB Management Service (MMS), by MongoDB
MongoDB Management Service (MMS) is the application for managing MongoDB, created by the engineers who develop MongoDB. MMS provides visibility into the performance of your cluster, alerting when key metrics are out of range and backup and recovery of your mission critical data. This session will provide you with an overview of MMS, including installation and setup and a walk through of metrics and alerts. Then we'll compare and contrast the various different backup strategies, with a deep dive on using MMS to back up your MongoDB data.
La Banque Nationale de Données Maladies Rares, by bndmr
Rémy Choquet and Paul Landais, the Banque Nationale de données maladies rares, Working Group 28, Strategic Council for the Health Industries and Strategic Sector Committee, June 18, 2014, Paris, France.
LORD: a disease-coding support tool - JFIM - June 13, 2014, by bndmr
LORD: a coding support tool for rare diseases.
Presentation by Yannick Fonjallaz at the Journées francophones d'informatique médicale, June 13, 2014, in Fès, Morocco.
MongoDB IoT City Tour EINDHOVEN: Analysing the Internet of Things: Davy Nys, ..., by MongoDB
Drawing on Pentaho's wide experience in solving customers' big data issues, Davy Nys will position the importance of analytics in the IoT:
- Understanding the challenges behind data integration & analytics for IoT
- Future-proofing your information architecture for IoT
- Delivering IoT analytics, now and tomorrow
- Real customer examples of where Pentaho can help
This document provides a checklist for deploying MongoDB, including application design considerations like schema and sharding, operational requirements for performance, capacity, high availability, backup, security, and monitoring. It also discusses hardware requirements and maintenance processes like upgrades.
Automate Your MongoDB Management with MMS, by MongoDB
MongoDB Management Service (MMS) makes life easier for operations teams by simplifying day-to-day management tasks. You can now manage everything from the MMS interface: provision servers, configure replica sets and clusters, and upgrade your MongoDB environment. In this session we will present the new MMS automation features. Demos will include provisioning servers, managing users, resizing clusters, and much more.
Presentation given at Breizhcamp on June 23, 2014.
Application monitoring... not exactly a trendy topic. And yet it is a field in flux, because continuous deployment and the DevOps approach are changing how information is exchanged with production, and because it is now possible to store the collected data at massive scale. I propose to explore these topics through a few examples.
More Flexibility and Scalability at Bouygues Télécom Thanks to MongoDB, by MongoDB
Like many operators, Bouygues Télécom maintains a directory of its customers' services. This system is critical for charging payments to subscribers' bills, authenticating to the mail service, streaming television, and many other services. A few years ago an off-the-shelf solution was chosen. After numerous problems - poor performance and an overly rigid data model - that system was replaced by a custom development built around MongoDB, Apache Storm, and Apache Tomcat. This presentation retraces the story of that overhaul and the pitfalls encountered and overcome to deliver a system with 99.9% availability handling loads of up to 3,000 req/s. We will talk about data modeling, DevOps, and Storm topologies.
Strong Authentication: Concepts and Technologies, by Ibrahima FALL
Presents the concept of strong authentication, with a few implementation techniques as illustration.
Talk website: http://www-igm.univ-mlv.fr/~dr/XPOSE2010/authentificationForte/index.php#introduction
Computer Network Monitoring - Nagios, by Aziz Rgd
Installing Nagios 3.5.0
Prerequisites
Before installing Nagios, start by updating the system:
# sudo apt-get update
# sudo apt-get upgrade
First, install the build-essential package, which provides the basic development libraries:
# sudo apt-get install build-essential
Nagios uses a web interface to interact with users, so a web server must be installed on the monitoring server.
We will use Apache (version 2):
# sudo apt-get install apache2 wget rrdtool bsd-mailx librrds-perl libapache2-mod-php5 php5 php-pear php5-gd php5-ldap php5-snmp libperl-dev
Some additional libraries are required for Nagios and its plugins to work properly:
# sudo apt-get install bind9-host dnsutils libbind9-80 libdns81 libisc83 libisccc80 libisccfg82 liblwres80 libradius1 qstat radiusclient1 snmp snmpd
To test your web server, first start it:
# sudo apache2ctl start
Check that Apache is running: open your web browser and enter the server's IP address. In my case it is 10.0.0.15.
Install the libraries Nagios uses to render network diagrams:
# sudo apt-get install libgd2-noxpm-dev libpng12-dev libjpeg62 libjpeg62-dev
Install MySQL:
# sudo apt-get install mysql-server
# sudo apt-get install php5-mysql
# sudo apt-get install libmysqlclient15-dev
For security reasons, the Nagios process will not run as root. We therefore create a nagios system user and a nagios group.
# sudo /usr/sbin/useradd nagios
# sudo passwd nagios
# sudo /usr/sbin/groupadd nagios
# sudo /usr/sbin/usermod -aG nagios nagios
# sudo /usr/sbin/usermod -aG nagios www-data
Downloading Nagios and the Nagios plugins
Before installing Nagios, go to the Nagios website to download the latest version of Nagios and the latest version of the plugins.
In this documentation we will use Nagios 3.5.0 and Nagios plugins 1.4.16.
Then download these versions onto the server:
# cd /usr/src
# sudo wget http://surfnet.dl.sourceforge.net/sourceforge/nagios/nagios-3.5.0.tar.gz
# sudo wget http://kent.dl.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.16.tar.gz
Compiling from source
Start by extracting the sources:
# sudo tar xzf nagios-3.5.0.tar.gz
# cd nagios
Run the build with the following commands:
# sudo ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-command-user=nagios --with-command-group=nagios --enable-event-broker --enable-nanosleep --enable-embedded-perl --with-perlcache
# sudo make all
# sudo make fullinstall
# sudo make install-config
Finally, install the startup script (so that Nagios starts automatically):
MongoDB Schema Design: Practical Applications and Implications, by MongoDB
Presented by Austin Zellner, Solutions Architect, MongoDB
Schema design is as much art as it is science, but it is central to understanding how to get the most out of MongoDB. Attendees will walk away with an understanding of how to approach schema design, what influences it, and the science behind the art. After this session, attendees will be ready to design new schemas, as well as re-evaluate existing schemas with a new mental model.
ElasticSearch Presentation / Digital Apéro of 12/11/2014, by Silicon Comté
ElasticSearch is an open-source search engine built around a JSON interface; it can run in distributed mode and is easily queried through its REST API. Cédric Nirousset, an independent web developer, will show you the benefits of using it in your applications through a few practical examples.
About the speaker: Cédric Nirousset earned a DUT SRC in Montbéliard in 2006 and a Computer Science degree from UTBM in 2010; he is now an independent web developer in Besançon, working for companies of all sizes and backgrounds. Follow Cédric on Twitter @Nyr0
Getting the Most from Your Data with ElasticSearch, by Séven Le Mesle
What is a search engine? What is ElasticSearch? How do you use it in the real world, and can you go beyond full-text search?
Meet the Experts: Visualize Your Time-Stamped Data Using the React-Based Gira..., by InfluxData
This document discusses Giraffe, a React-based library for visualizing time-series data from InfluxData. It provides examples of using Giraffe to visualize data exported from Flux queries in InfluxData by converting the data to layers in Giraffe configurations. The document also contains code examples for connecting to InfluxData and executing Flux queries to export data to visualize in Giraffe.
MongoDB for Time Series Data: Analyzing Time Series Data Using the Aggregatio..., by MongoDB
This document discusses using time series data from traffic sensors to monitor road conditions and support navigation systems. It reviews using MongoDB to store sensor data and perform aggregations to calculate metrics like average speed. MapReduce and the aggregation framework are demonstrated for queries like calculating average speeds by weather, road status, or pavement conditions. Hadoop and the MongoDB connector for Hadoop are mentioned for processing large datasets in parallel across nodes.
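As a rough illustration of the grouped averages described above, here is a minimal pure-Python sketch (the readings and field names are invented, not the talk's schema; in MongoDB itself this would be a `$group` stage with `$avg` in the aggregation pipeline):

```python
from collections import defaultdict

# Invented sensor readings; field names are illustrative only.
readings = [
    {"speed": 60, "weather": "clear"},
    {"speed": 70, "weather": "clear"},
    {"speed": 40, "weather": "rain"},
    {"speed": 30, "weather": "rain"},
]

def average_speed_by(readings, field):
    # Accumulate (sum, count) per group, then divide - conceptually what
    # [{"$group": {"_id": "$weather", "avg": {"$avg": "$speed"}}}] does server-side.
    totals = defaultdict(lambda: [0, 0])
    for r in readings:
        totals[r[field]][0] += r["speed"]
        totals[r[field]][1] += 1
    return {group: s / n for group, (s, n) in totals.items()}

print(average_speed_by(readings, "weather"))  # {'clear': 65.0, 'rain': 35.0}
```

The same function can be pointed at "road status" or "pavement condition" fields just by changing the `field` argument, which mirrors how a `$group` key is swapped in a pipeline.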
The document summarizes the architecture of Airbnb's search system. It uses Lucene for indexing listings, with replicas of the index distributed across multiple servers. Real-time updates from data sources are propagated to all search nodes using Kafka. A custom-built forward index stores metadata for complex filtering and ranking. The system handles over 1.2 million listings worldwide with low latency search and real-time updates.
Operational Intelligence with MongoDB Webinar, by MongoDB
This document discusses using MongoDB for operational intelligence and real-time analytics of log and event data. It describes how MongoDB can ingest large volumes of data from multiple sources at high write volumes. Queries can then be performed rapidly to analyze the data and drill down into specific events. The aggregation framework is used to generate rollups and reports from the data on-demand or on a scheduled basis.
1) MongoDB is used to collect analytics data from GitHub pages in real-time with over 10-15 million page views per day stored across 13 servers.
2) Data is stored in a denormalized manner across multiple collections to optimize for space, RAM, and read performance while live querying is supported.
3) As data volume grows over time, the data will need to be partitioned either by time frame, functionality, or individual servers to support the increased load.
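One way to picture the time-frame partitioning option in point 3 is routing events to per-period collections, so an old period can be archived or dropped wholesale (the naming scheme below is hypothetical, not GitHub's actual one):

```python
from datetime import datetime

def collection_for(event_time: datetime) -> str:
    # Route each event to a per-month collection, e.g. "pageviews_2014_06".
    # Dropping a whole collection is far cheaper than deleting old documents.
    return f"pageviews_{event_time.year}_{event_time.month:02d}"

print(collection_for(datetime(2014, 6, 18)))  # pageviews_2014_06
```

Partitioning by functionality or by server follows the same idea with a different routing key.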
MongoDB for Time Series Data Part 2: Analyzing Time Series Data Using the Agg..., by MongoDB
The United States will be deploying 16,000 traffic speed monitoring sensors - 1 on every mile of US interstate in urban centers. These sensors update the speed, weather, and pavement conditions once per minute. MongoDB will collect and aggregate live sensor data feeds from roadways around the country, support real-time queries from cars on traffic conditions on their route as well as be the platform for real-time dashboards displaying traffic conditions and more complex analytical queries used to identify traffic trends. In this session, we’ll implement a few different data aggregation techniques to query and dashboard the metrics gathered from the US interstate.
The document discusses how ArcGIS can be used to ingest, visualize, analyze, and share scientific data stored in formats like netCDF, HDF, and GRIB: directly reading these files, creating multidimensional mosaics for aggregation, analyzing spatial and temporal patterns, publishing services and maps, and extending capabilities through Python tools and custom geoprocessing. ArcGIS supports the full scientific data workflow, from ingesting data to sharing final results and apps on the web and with other platforms like WMS and Dapple Earth Explorer.
The document discusses RxJS, a library for reactive programming using Observables that provide an API for asynchronous programming with observable streams. It provides code examples of using RxJS to handle events, AJAX requests, and animations as Observables. It also compares RxJS to Promises and native JavaScript event handling, and lists several frameworks that use RxJS like Angular and Redux.
GeoServer 2.2 includes new capabilities for time and elevation support for both vectors and rasters. It also introduces rendering transformations, paging and stored queries for WFS 2.0, asynchronous WPS calls, and improved security features like LDAP authentication. Additional updates involve referencing support, an image collection store, virtual services, and expanded scripting abilities.
The document discusses techniques for writing readable code, including:
- Code should be easy for others to understand by using clear naming conventions, comments only where needed, and simple control flow.
- Surface-level readability can be improved through specific and unambiguous naming, consistent formatting, and avoiding overly long or generic names.
- Loops and logic should read like natural language to make the flow of execution easy to follow. This includes ordering conditional statements positively first and breaking down complex expressions.
- Code can be made more scannable through proper indentation and grouping of related lines together into blocks. Overall the goal is to minimize the time it takes someone new to understand the code.
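The naming and positive-condition advice above can be shown with a small before/after sketch (a hypothetical example, not taken from the talk):

```python
# Harder to read: terse names and stacked negations hide the intent.
def flt(u):
    return [x for x in u if not x["banned"] and not x["inactive"]]

# Easier to read: an intention-revealing helper that reads like a sentence,
# so the comprehension below needs no comment at all.
def is_active_member(user):
    return not user["banned"] and not user["inactive"]

def active_members(users):
    return [user for user in users if is_active_member(user)]

users = [
    {"name": "ada", "banned": False, "inactive": False},
    {"name": "bob", "banned": True, "inactive": False},
]
print([u["name"] for u in active_members(users)])  # ['ada']
```

Both functions compute the same result; the second simply costs a new reader less time, which is the metric the document proposes.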
IT Days - Parse huge JSON files in a streaming way.pptx, by Andrei Negruti
Everyone uses JSON files. Thankfully, most of the time the JSON files we use are small, and we can just read and process everything in memory because it is convenient and easy to do. But most of the time is not all of the time. Sometimes you must process big JSON files, and the moment you try to do this the old-fashioned way you will soon see the dreadful "java.lang.OutOfMemoryError." One search on the internet and you will find solutions to this problem. Concisely, you will see variations of these answers:
- Split your file into smaller ones
- Increase the maximum memory used (yes, this is one of the answers)
- Save the JSON in a temporary file and use the streaming capabilities of GSON or Jackson
GSON and Jackson work well, but they require you to write a lot of boilerplate code and get your hands dirty with lots of tokens, if checks, path checks, etc. We developed a fourth option: we abstracted away what Jackson can do and created an interface that is easy to understand and interact with. With its help we delivered increased performance and reduced the memory needed to run our service by more than 50%, while also being able to translate an unbounded number of paragraphs, because we no longer hold the entire file in memory.
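The streaming idea can be sketched with Python's standard library alone: consume the input in chunks and decode one top-level JSON value at a time with `json.JSONDecoder.raw_decode`, so only a small buffer ever lives in memory. This is a simplified stand-in for the Jackson-based interface the talk describes, not their implementation:

```python
import json

def iter_json_values(chunks):
    """Yield top-level JSON values from an iterable of text chunks,
    holding only a small buffer instead of the whole file."""
    decoder = json.JSONDecoder()
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while True:
            buffer = buffer.lstrip()
            if not buffer:
                break
            try:
                value, end = decoder.raw_decode(buffer)
            except json.JSONDecodeError:
                break  # value is incomplete; wait for the next chunk
            yield value
            buffer = buffer[end:]

# Two chunks that split an object mid-key, as a chunked file read would.
chunks = ['{"a": 1} {"b"', ': 2}']
print(list(iter_json_values(chunks)))  # [{'a': 1}, {'b': 2}]
```

A production version would also distinguish "incomplete" from "malformed" input (here both raise `JSONDecodeError`) and cap the buffer size, but the memory behavior is the point: each value is handed to the caller and discarded before the next is parsed.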
Scaling Up: How Switching to Apache Spark Improved Performance, Realizability..., by Databricks
This document summarizes how switching from Hadoop to Spark for data science applications improved performance, reliability, and reduced costs at Salesforce. Some key issues addressed were handling large datasets across many S3 prefixes, efficiently computing segment overlap on skewed user data, and performing joins on highly skewed datasets. These changes resulted in applications that were 100x faster, used 10x less data, had fewer failures, and reduced infrastructure costs.
The document discusses how to use RxJS (Reactive Extensions library for JavaScript) to treat events like arrays by leveraging Observable types and operators. It explains key differences between Observables and Promises/Arrays, how Observables are lazy and cancelable unlike Promises. Various RxJS operators like map, filter, interval and fromEvent are demonstrated for transforming and composing Observable streams. The document aims to illustrate how RxJS enables treating events as collections that can be processed asynchronously over time.
How to Hack a Road Trip with a Webcam, a GPS and Some Fun with Node, by pdeschen
Part of a presentation at the nodemtl meetup. Presenting Kerouac, a real-time web app featuring a remote GPS tracking device, a webcam, and a whole lot of Node.js magic, covering Node.js basics such as event emitters and process spawning.
D3.js - A picture is worth a thousand words, by Apptension
This document provides an overview of D3.js, a JavaScript library for data visualization. It discusses why data visualization is useful, some key concepts in D3 like selections, entering and updating data, and creating reusable components. It also covers transitions, scales, axes, SVG, and common layouts. The document encourages exploring more examples on the bl.ocks website and concludes by thanking the audience.
HTML5 is all the rage with the cool kids, and although there’s a lot of focus on the new language, there’s plenty for web app developers with new JavaScript APIs both in the HTML5 spec and separated out as their own W3C specifications. This session will take you through demos and code and show off some of the outright crazy bleeding edge demos that are being produced today using the new JavaScript APIs. But it’s not all pie in the sky – plenty is useful today, some even in Internet Explorer!
Fun with D3.js: Data Visualization Eye Candy with Streaming JSONTomomi Imura
The document discusses creating dynamic bubble charts using D3.js and streaming JSON data from PubNub. It explains how to (1) create a static bubble chart with D3, (2) make the chart dynamic by subscribing to a PubNub data stream and updating the bubbles on new data, and (3) add smooth transitions as bubbles enter, update, and exit using D3's data binding and transition methods. The full article provides more details on implementing this dynamic bubble chart with animated transitions between data updates.
Apéro RubyBdx - MongoDB - 8-11-2011
3. What is mongoDB?
mongoDB is a NoSQL database:
schema-less,
document-oriented
4. Schema-less
• Very useful for 'agile' development (fast
iterations, easy changes,
flexibility for developers)
• Supports use cases that would be, in
relational databases:
• nearly impossible (storing open-ended sets of elements, e.g. tags)
• needlessly complex for what they are (migrations)
5. document-oriented
• mongoDB stores documents, not
rows
• documents are stored as
JSON, or rather binary JSON (BSON)
• the query syntax is as rich as
SQL
• the 'embedded' documents mechanism
solves many common modeling problems
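To make the embedded-documents point concrete, here is a minimal sketch in plain JavaScript (the ticket fields and item prices are made up for illustration):

```javascript
// Hypothetical ticket document with embedded line items -- in a relational
// database this would typically require a join across two tables.
var ticket = {
  id: 1,
  day: 20111017,
  checkout: 100,
  items: [                        // embedded documents, stored inline
    { label: "coffee", price: 30 },
    { label: "lunch",  price: 70 }
  ]
};

// The embedded items travel with the parent: one read, no join.
var itemTotal = ticket.items.reduce(function(sum, item) {
  return sum + item.price;
}, 0);

console.log(itemTotal); // 100, matches the checkout field
```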
6. document-oriented
• Documents are stored in a
collection (the equivalent of a model in RoR)
• part of this data is indexed
to optimize performance
• a document is not a dumping ground!
7. storing large volumes
of data
• mongoDB (and other NoSQL databases) perform
better when scaling horizontally
• add servers to increase storage
capacity ('sharding')
• which also guarantees better availability
• optimized load balancing between nodes
• capacity grows transparently for the application
8. Practical case
• the ORM becomes an ODM; the reference gem is mongoid
• alternatives: mongoMapper, DataMapper
• Creating an application backed by NoSQL MongoDB:
• rails new nosql
• edit the Gemfile:
• gem 'mongoid'
• gem 'bson_ext'
• bundle install
• rails generate mongoid:config
13. The problem
• We want to:
• Calculate the sum of the 'checkout' field over every object in our
tickets collection
• Be able to distribute this operation over the network
• Be fast!
• We don't want to:
• Go over all objects again when an update is made
14. Map: emit(checkout)
The 'map' function emits (selects) the checkout value
of each object in our collection
emitted values: 100, 42, 215, 73

{ "id": 1, "day": 20111017, "checkout": 100 }
{ "id": 2, "day": 20111017, "checkout": 42 }
{ "id": 3, "day": 20111017, "checkout": 215 }
{ "id": 4, "day": 20111017, "checkout": 73 }
16. Reduce function
The 'reduce' function applies the aggregation logic
to each key and its values received from the 'map' function
This function has to be commutative and 'idempotent' so it can be
called recursively or in a distributed system
reduce(k, [A, B]) == reduce(k, [B, A])
reduce(k, [A, B]) == reduce(k, [reduce(k, [A, B])])
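These properties can be checked directly in plain JavaScript; the `reduce` below mirrors the sum function shown on the later slides:

```javascript
// The same sum reduce used later in the deck.
var reduce = function(key, values) {
  var sum = 0;
  for (var index in values) sum += values[index];
  return sum;
};

var A = 100, B = 42;

// Order of values must not matter (commutative).
var commutative = reduce(null, [A, B]) === reduce(null, [B, A]);

// Re-reducing a partial result must give the same answer (idempotent),
// which is what lets reduce be called recursively across shards.
var idempotent = reduce(null, [A, B]) === reduce(null, [reduce(null, [A, B])]);

console.log(commutative, idempotent); // true true
```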
18. Distributed
Since the 'map' function emits objects to be reduced,
and the 'reduce' function processes each batch of emitted
objects independently, the work can be distributed
across multiple workers.
map → reduce
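A minimal sketch of that idea in plain JavaScript, with two hypothetical 'workers' each reducing their own share of the emitted values before a final reduce combines the partial results:

```javascript
var reduce = function(key, values) {
  var sum = 0;
  for (var index in values) sum += values[index];
  return sum;
};

// Emitted checkout values from slide 14, split across two workers.
var worker1 = [100, 42];
var worker2 = [215, 73];

// Each worker reduces its own partition independently...
var partial1 = reduce(null, worker1); // 142
var partial2 = reduce(null, worker2); // 288

// ...and a final reduce combines the partial results.
var total = reduce(null, [partial1, partial2]);
console.log(total); // 430
```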
19. Logarithmic Update
For the same reason, when an object is updated we
don't have to reprocess every object.
We can call the 'map' function only on the updated
objects and re-reduce their values with the previous result.
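A sketch of the incremental update, assuming the previous total has been kept around (the new ticket below is a made-up example):

```javascript
var reduce = function(key, values) {
  var sum = 0;
  for (var index in values) sum += values[index];
  return sum;
};

var previousTotal = 430;          // result already stored from the first run

// A hypothetical new ticket arrives; map emits only its checkout value.
var newTicket = { id: 5, day: 20111018, checkout: 70 };
var emitted = newTicket.checkout;

// Re-reduce the old total with the new value -- no full rescan needed.
var newTotal = reduce(null, [previousTotal, emitted]);
console.log(newTotal); // 500
```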
26. > var map = function() {
... emit(null, this.checkout)
... }
> var reduce = function(key, values) {
... var sum = 0
... for (var index in values) sum += values[index]
... return sum
... }
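Outside the mongo shell, the same pair of functions can be exercised in plain JavaScript by collecting what `emit` produces; the `emit` helper and the in-memory documents below are stand-ins for what MongoDB normally provides:

```javascript
// The four ticket documents from slide 14, held in memory.
var tickets = [
  { id: 1, day: 20111017, checkout: 100 },
  { id: 2, day: 20111017, checkout: 42  },
  { id: 3, day: 20111017, checkout: 215 },
  { id: 4, day: 20111017, checkout: 73  }
];

// Collect emitted key/value pairs, as the server would.
var emitted = {};
var emit = function(key, value) {
  (emitted[key] = emitted[key] || []).push(value);
};

var map = function() { emit(null, this.checkout); };
var reduce = function(key, values) {
  var sum = 0;
  for (var index in values) sum += values[index];
  return sum;
};

// Run map with each document as `this`, then reduce the single key.
tickets.forEach(function(doc) { map.call(doc); });
var total = reduce(null, emitted[null]);
console.log(total); // 430
```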
30. > var map = function() {
... emit(this.day, this.checkout)
... }
> var reduce = function(key, values) {
... var sum = 0
... for (var index in values) sum += values[index]
... return sum
... }
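Simulated the same way, the keyed variant yields one sum per day (the document set with two distinct days is a made-up example):

```javascript
var tickets = [
  { id: 1, day: 20111017, checkout: 100 },
  { id: 2, day: 20111017, checkout: 42  },
  { id: 3, day: 20111018, checkout: 215 },
  { id: 4, day: 20111018, checkout: 73  }
];

// Collect emitted key/value pairs, grouped by key.
var emitted = {};
var emit = function(key, value) {
  (emitted[key] = emitted[key] || []).push(value);
};

// Key by day instead of null: one reduced sum per day.
var map = function() { emit(this.day, this.checkout); };
var reduce = function(key, values) {
  var sum = 0;
  for (var index in values) sum += values[index];
  return sum;
};

tickets.forEach(function(doc) { map.call(doc); });

var totals = {};
for (var key in emitted) totals[key] = reduce(key, emitted[key]);
console.log(totals); // { '20111017': 142, '20111018': 288 }
```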