Speaker: Robyn Allen, Software Engineer, Central Inventions
Level: 100 (Beginner)
Track: Tutorials
To provide a hands-on opportunity to work with real data, this session will center on a web-hosted quiz application that helps students practice math and memorize vocabulary. After experimenting with a small demonstration dataset (generated by each individual during the workshop), attendees will be guided through working with an anonymized dataset in MongoDB. No prior MongoDB experience is required, but attendees are expected to download and install MongoDB Community Edition (available for free from mongodb.com) and have a working Python 3 environment of their choice (e.g., IDLE, free from python.org) installed on a laptop they bring to the workshop.
Prerequisites:
Attendees are expected to bring a laptop with the following software installed:
MongoDB 3.4.x Community Edition
The text editor or IDE of their choice
A working Python 3 environment of their choice
No prior MongoDB experience is required.
What You Will Learn:
- How to load a CSV file into MongoDB using mongoimport and then write queries (using the Mongo shell) to ensure the data appears as expected. Attendees will use a demo version of an online quiz app to generate a small data file of raw session data (which can be accessed via http://strawnoodle.com/api/testdata after logging in to the demo app and answering one or more quiz questions about MongoDB). After studying how the demo app stores session data, attendees will practice using mongoimport to import anonymized session data (provided during the workshop) into MongoDB.
- How to use the aggregation pipeline (in PyMongo) to implement more complicated queries and gain insights from data. Because the sample dataset contains data from a variety of users of different skill levels, queries can be designed which reveal summary statistics for the anonymous user cohort or specific performance of individual users. Participants will receive instruction in using MongoDB aggregation pipelines in order to write powerful, efficient queries with very few lines of code.
- How to write queries to analyze sample data from an online quiz app. Once the sample data has been loaded into MongoDB, participants will be guided in writing basic queries to examine the sample data. Participants will have an opportunity to write queries in the Mongo shell and in Python in order to familiarize themselves with syntax variations and key ideas. Participants will learn how to implement CRUD operations in PyMongo.
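The aggregation-pipeline workflow outlined above can be sketched in PyMongo. The collection and field names here (`sessions`, `user_id`, `topic`, `correct`) are hypothetical placeholders, not the workshop's actual schema; running the real query requires a local mongod, so the block also includes a tiny in-memory stand-in for checking the logic.

```python
# Sketch: per-user accuracy from raw quiz-session documents.
# Collection/field names (sessions, user_id, topic, correct) are
# hypothetical placeholders, not the workshop's actual schema.

# The pipeline as it would be passed to PyMongo:
#   db.sessions.aggregate(pipeline)   # requires a running mongod
pipeline = [
    {"$match": {"topic": "mongodb"}},                      # keep one topic
    {"$group": {
        "_id": "$user_id",                                 # one doc per user
        "attempts": {"$sum": 1},
        "accuracy": {"$avg": {"$cond": ["$correct", 1, 0]}},
    }},
    {"$sort": {"accuracy": -1}},
]

# A tiny in-memory stand-in so the logic can be checked without a server:
def run_pipeline(docs):
    matched = [d for d in docs if d["topic"] == "mongodb"]
    by_user = {}
    for d in matched:
        by_user.setdefault(d["user_id"], []).append(1 if d["correct"] else 0)
    rows = [{"_id": u, "attempts": len(v), "accuracy": sum(v) / len(v)}
            for u, v in by_user.items()]
    return sorted(rows, key=lambda r: -r["accuracy"])

sample = [
    {"user_id": "a", "topic": "mongodb", "correct": True},
    {"user_id": "a", "topic": "mongodb", "correct": False},
    {"user_id": "b", "topic": "mongodb", "correct": True},
    {"user_id": "b", "topic": "python",  "correct": False},
]
print(run_pipeline(sample))
```

Few lines, yet the query filters, groups, computes a summary statistic, and sorts — the "powerful, efficient queries with very few lines of code" the session promises.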
It's 10pm: Do You Know Where Your Writes Are? (MongoDB)
Speaker: Samantha Ritter, Software Engineer, MongoDB
Level: 200 (Intermediate)
Track: How We Build MongoDB
MongoDB 3.6 delivers three new features to help you develop resilient applications: retryable writes, a cluster-wide killOp command, and zombie cursor cleanup. These features share a common base, an idea called a logical session. This new cluster-wide concept of user state is the quiet magic that allows you to know, with certainty, the status of your operations. MongoDB engineer Samantha Ritter will describe these features in depth, discuss when and how logical sessions can be used by applications and administrators, and show you how we implemented sessions for large, distributed systems.
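The retry machinery rests on making each write identifiable: the driver attaches a (session id, transaction number) pair to the write, so a replayed write can be recognized and applied at most once. A toy illustration of that idea — not MongoDB's actual implementation:

```python
import uuid

# Toy model of exactly-once writes keyed by (session id, txn number).
# An illustration of the idea behind retryable writes, not the server's code.
class Server:
    def __init__(self):
        self.data = []
        self.seen = set()          # (session_id, txn_number) already applied

    def write(self, session_id, txn_number, doc):
        key = (session_id, txn_number)
        if key in self.seen:       # a retry of a write that already landed
            return "already applied"
        self.seen.add(key)
        self.data.append(doc)
        return "applied"

server = Server()
session = uuid.uuid4()

# First attempt succeeds but the ack is lost in the network; the driver
# retries the identical write and the server recognizes it:
print(server.write(session, 1, {"x": 1}))  # applied
print(server.write(session, 1, {"x": 1}))  # already applied (no duplicate)
print(len(server.data))                    # 1
```

This is why the driver can safely retry on a network error: the worst case is a harmless replay, never a double write.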
What You Will Learn:
- What logical sessions are and how they are implemented in the server
- How to leverage logical sessions for retryable writes
- How to pull the new cluster-wide killOp emergency brake
- CTO and lecturer who created Metarhia, an application server for Node.js that focuses on scalability, reliability, and clean architecture principles.
- Metarhia includes packages for SQL, logging, configuration, schemas, and more that work together to provide an isolated and scalable backend.
- It emphasizes simplicity, avoiding middleware and global dependencies, with features like live reloading, graceful shutdown, and automatic dependency injection.
Running a MongoDB cluster is usually smooth sailing, but as your load increases you may notice things start to slow down. This talk will run through a few of the options you have to notice problems, and the ways to fix them. We'll be focusing mainly on running a cluster inside EC2, as the challenges are slightly different, but you should learn something regardless of where you're hosted.
This document discusses tools and techniques for diagnosing and debugging MongoDB deployments, drawing parallels to Sherlock Holmes' methods of investigation. It provides an overview of OS-level and MongoDB-specific tools for gathering data from logs and systems, including mtools for analyzing MongoDB logs. Examples demonstrate using mloginfo to extract query statistics and mplotqueries to visualize query patterns and collections scanned over time. The document advocates applying Holmes' principles of eliminating factors, balancing probabilities, and using imagination to scientifically analyze data and reveal the truth.
This document summarizes Asya Kamsky's presentation on diagnostics and debugging tools for MongoDB. It discusses tools like mongostat, mongotop, db.currentOp(), and MongoDB Management Service for monitoring databases. It also describes the mtools library for analyzing MongoDB logs, including mloginfo to get log metadata, mlogfilter to filter logs, and mplotqueries to visualize query data from logs in different plot types. The presentation uses quotes from Sherlock Holmes stories as an analogy to discuss gathering and analyzing data from MongoDB logs.
DTrace is a comprehensive dynamic tracing framework created by Sun Microsystems for troubleshooting kernel and application problems on production systems in real time. It provides probes in the operating system and applications to monitor events, collects and aggregates data, and provides tools to analyze the data. DTrace can be used on Unix-like systems like Solaris, Linux, macOS, and in Node.js applications through a DTrace provider. It allows gathering insights about the system and application behavior without restarting or slowing the system.
Intravert: Server-side processing for Cassandra (Edward Capriolo)
The document provides examples of using CQL (Cassandra Query Language) to create and query tables in Cassandra. It shows how to create tables to store user and video data, insert sample records, and perform queries. It then discusses using the IntraVert library to execute more complex queries directly against Cassandra, such as joins, filters, and multi-table operations, in order to reduce network traffic and processing compared to doing everything on the client side.
The CAP theorem is widely known for distributed systems, but it's not the only tradeoff you should be aware of. For datastores, there is also the FAB theory and just like with the CAP theorem you can only pick two:
Fast: Results are real-time or near real-time instead of batch-oriented.
Accurate: Answers are exact and don't have a margin of error.
Big: You require horizontal scaling and need to distribute your data.
While Fast and Big are relatively easy to understand, Accurate is a bit harder to picture. This talk shows some concrete examples of accuracy tradeoffs Elasticsearch can take for terms aggregations, cardinality aggregations with HyperLogLog++, and the IDF part of the full-text search. Or how to trade some speed or the distribution for more accuracy.
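The accuracy tradeoff can be felt with a tiny k-minimum-values estimator — the same family of cardinality sketch as HyperLogLog: bounded memory, approximate answer. This is a didactic stand-in, not Elasticsearch's HyperLogLog++ implementation:

```python
import hashlib

def kmv_estimate(items, k=64):
    """Estimate the distinct count from the k smallest normalized hashes."""
    hashes = set()
    for item in items:
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16)
        hashes.add(h / 2**128)          # normalize 128-bit hash into [0, 1)
    smallest = sorted(hashes)[:k]
    if len(smallest) < k:
        return len(smallest)            # fewer distinct values than k: exact
    # If n distinct values are spread uniformly over [0, 1), the k-th
    # smallest sits near k/n, so the density gives back the cardinality.
    return int((k - 1) / smallest[-1])

data = [i % 1000 for i in range(100_000)]   # exactly 1000 distinct values
print(kmv_estimate(data))   # close to 1000, but not exact: that's the trade
```

The sketch stores at most k hashes no matter how big the stream gets — Fast and Big, with Accurate traded for a bounded error margin.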
MongoDB Europe 2016 - Enabling the Internet of Things at Proximus - Belgium's... (MongoDB)
Proximus is one of the biggest telecom companies in the Belgian market. This year the company began developing a new IoT network using LoRaWan technology. The talk will detail our development team's search for a database suited to meet the needs of our IoT project, the selection and implementation of MongoDB as a database, as well as how we built a system for storing a variety of sensor data with high throughput by leveraging sleepy.mongoose. The talk will also discuss how different decisions around data storage impact applications in regards to both performance and total cost.
The document contains source code for a client-server application written in Java. The client code establishes a socket connection to the server, allows sending and receiving messages, and closes the connection. The server code starts by binding to a port, accepts new connections from clients, and spawns a new thread to handle each client connection concurrently. It reads and writes data from the socket and closes the connection when the client disconnects. The code includes classes for the client, server, and thread handling each client connection.
The code examples show source code for a client and server application for a chat program. The client code defines functions for connecting to the server, sending and receiving messages. The server code defines functions for starting the server, accepting new connections from clients, and handling message receives and sends between connected clients. The code implements multi-threaded processing to concurrently handle multiple client connections to the server.
The document contains source code for a client-server chat application written in Java. The client code establishes a socket connection to the server, reads user input and sends messages to the server. The server code initializes a server socket to listen for client connections, spawns a new thread for each client, reads incoming messages and sends responses. The code includes graphical user interface components for selecting the client or server role, composing and displaying messages.
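The pattern those documents describe — bind to a port, accept connections, spawn a thread per client, echo data until disconnect — is compact enough to sketch end to end. The originals are Java; this is an equivalent Python sketch, not the documents' code:

```python
import socket
import threading

def handle_client(conn):
    """One thread per client: echo bytes back until the peer disconnects."""
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def serve(server_sock):
    """Accept loop: hand each new connection to its own daemon thread."""
    while True:
        try:
            conn, _addr = server_sock.accept()
        except OSError:            # listening socket closed: shut down
            return
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

# Bind to an ephemeral loopback port and accept in the background.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: connect, send a message, read the echo.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
print(reply)        # b'hello'
server.close()
```

The thread-per-connection model is the simplest way to serve clients concurrently; the Java versions summarized above follow the same shape with `ServerSocket.accept()` and `new Thread(...)`.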
This document describes the implementation of a simple REST server in Qt using reflection. It discusses how the Qt meta-object compiler (moc) is used, the abstract and concrete server classes, building the route tree using reflection, handling new connections in worker threads, calling methods based on the request, and using reflection for testing. The abstract server class inherits from QTcpServer and uses slots decorated with tags to implement routes. Worker threads handle individual connections and parse requests to call the appropriate method. Reflection is leveraged throughout to build routes and dispatch requests without explicit registration or mapping.
Detection of REST Patterns and Antipatterns: A Heuristics-based Approach (Francis Palma)
The document describes two responses from the Dropbox API. The first response includes metadata about a folder and its contents. The second response includes information about a specific user account. Both responses lack hyperlinks that could be followed to other resources.
WebCamp: Developer Day: Web Security: Cookies, Domains and CORS - Юрий Чайков... (GeeksLab Odessa)
Web Security: Cookies, Domains and CORS
Юрий Чайковский
On the same-origin policy, proposed back in 1995 and still relevant today, and on the use and limitations of cross-origin requests. An example of CSRF attacks, plus the server configuration rules that protect against them. Also covered: the latest additions for controlling content origin to prevent XSS attacks. In addition:
- The same-origin policy.
- Using cross-origin requests.
- CSRF attacks (with a demonstration).
- A classification of browser requests.
- Restrictions on cross-origin requests.
- Server-side access control.
- Internet Explorer 8 and 9 specifics.
- Content Security Policy (CSP).
Philipp Krenn | Make Your Data FABulous | Codemotion Madrid 2018 (Codemotion)
The CAP theorem is widely known for distributed systems, but it's not the only tradeoff you should be aware of. For datastores there is also the FAB theory and just like with the CAP theorem you can only pick two: fast, accurate, big. While Fast and Big are relatively easy to understand, Accurate is a bit harder to picture. This talk shows some concrete examples of accuracy tradeoffs Elasticsearch can take for terms aggregations, cardinality aggregations with HyperLogLog++, and the IDF part of full-text search. Or how to trade some speed or the distribution for more accuracy.
1) Hatohol is a server that collects and merges data from Zabbix and Nagios servers. It has a web-based client for visualizing this data.
2) The Hatohol server architecture pulls data from Zabbix and Nagios using APIs and stores it in a unified database. The server also has a REST API for the client.
3) Future plans for Hatohol include adding an action framework to allow it to take actions based on triggers, improving high availability, adding graphing capabilities, and a more sophisticated web client.
The document discusses MongoDB performance tuning. It begins by distinguishing between optimizing, which involves restructuring applications and data, and performance tuning, which experiments with system modifications. It recommends investigating performance using log files, the profiler, and explain queries. Creating an index on first_name and last_name fields improved the performance of a query searching on first_name from 480ms to 7ms by enabling an index scan instead of a collection scan. The document suggests continuing performance monitoring and investigating other issues at future meetings.
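The 480 ms-to-7 ms jump in that summary is the classic collection-scan-to-index-scan improvement, which can be mimicked in plain Python: a collection scan touches every document, while an index is a prebuilt lookup structure that touches only the matches. A toy model (the data and numbers here are illustrative, not the document's):

```python
from collections import defaultdict

# 100,000 fake documents; 5,000 distinct first names.
docs = [{"first_name": f"user{i % 5000}", "last_name": "x", "i": i}
        for i in range(100_000)]

# Collection scan: examine every document to find matches.
def collection_scan(docs, name):
    return [d for d in docs if d["first_name"] == name]

# "Index" on first_name: built once, then each query touches only matches.
def build_index(docs, field):
    index = defaultdict(list)
    for d in docs:
        index[d[field]].append(d)
    return index

index = build_index(docs, "first_name")

# Same results, but the lookup skips 99.98% of the collection:
assert collection_scan(docs, "user42") == index["user42"]
print(len(index["user42"]))    # 20 matches found without scanning 100,000 docs
```

A B-tree index in MongoDB works the same way in spirit: the query planner walks a small, sorted structure straight to the matching documents instead of scanning the collection.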
Node.js is an asynchronous JavaScript runtime that allows for efficient handling of I/O operations. The presentation discusses developing with Node.js by using modules from NPM, debugging with node-inspector, common pitfalls like blocking loops, and best practices like avoiding large heaps and offloading intensive tasks. Key Node.js modules demonstrated include Express for web frameworks and Socket.io for real-time applications.
This document discusses tuning MongoDB performance. It covers tuning queries using the database profiler and explain commands to analyze slow queries. It also covers tuning system configurations like Linux settings, disk I/O, and memory to optimize MongoDB performance. Topics include setting ulimits, IO scheduler, filesystem options, and more. References to MongoDB and Linux tuning documentation are also provided.
MongoDB World 2016: Deciphering .explain() Output (MongoDB)
The document discusses different explain modes for MongoDB queries and aggregations. It begins with an overview of explain() and query plans, then covers the default "queryPlanner" mode which shows the winning and rejected plans. It also mentions the "executionStats" and "allPlansExecution" modes which provide more runtime statistics. The document aims to help understand how queries and aggregations are executed and troubleshoot performance issues.
This document provides an overview of Cuckoo sandbox and tips for using and customizing it. It discusses supported platforms and hypervisors, how to retrieve analysis results using signatures, different ways to write hooks, and examples of analyzing malware like Andromeda and Locky. The document also shares some "goodies" like redirecting SMTP traffic and injecting emulator headers to trigger behaviors.
Webinar slides: How to Secure MongoDB with ClusterControl (Severalnines)
Watch the slides of our webinar on “How to secure MongoDB with ClusterControl” and find out about the essential steps necessary to secure MongoDB and how to verify if your MongoDB instance is safe.
The recent MongoDB ransom hack caused a lot of damage and outages, yet it could have been prevented with two or three simple configuration changes. MongoDB offers a lot of security features out of the box; however, most of them are disabled by default.
In this webinar, we explain which configuration changes are necessary to enable MongoDB’s security features, and how to test if your setup is secure after enablement. We also demonstrate how ClusterControl enables security on default installations. And we cover how to leverage the ClusterControl advisors and the MongoDB Audit Log to constantly scan your environment, and harden your security even more.
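The single most important of those configuration changes — the ones that would have stopped the ransom hack — are enabling authorization and not binding mongod to a public interface. In `mongod.conf` terms, a minimal fragment (to be adapted to your deployment):

```yaml
# Minimal hardening fragment for mongod.conf (adapt addresses to your setup).
net:
  bindIp: 127.0.0.1        # never expose an unprotected mongod on 0.0.0.0
security:
  authorization: enabled   # require authenticated users/roles for every connection
```

With `authorization: enabled`, an admin user must be created before remote clients can do anything, which is exactly the step the ransomed deployments skipped.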
AGENDA
What is the MongoDB ransom hack?
What other security threats are valid for MongoDB?
How to enable authentication / authorisation
How to secure MongoDB from ransomware
How to scan your system
ClusterControl MongoDB security advisors
Live Demo
SPEAKER
Art van Scheppingen is a Senior Support Engineer at Severalnines. He's a pragmatic MySQL and database expert with over 15 years of experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop, and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.
Beyond PHP - it's not (just) about the code (Wim Godden)
Most PHP developers focus on writing code. But creating Web applications is about much more than just writing PHP. Take a step outside the PHP cocoon and into the big PHP ecosphere to find out how small code changes can make a world of difference on servers and network. This talk is an eye-opener for developers who spend over 80% of their time coding, debugging and testing.
The document provides best practices for handling performance issues in an Odoo deployment. It recommends gathering deployment information, such as hardware specs, number of machines, and integration with web services. It also suggests monitoring tools to analyze system performance and important log details like CPU time, memory limits, and request processing times. The document further discusses optimizing PostgreSQL settings, using tools like pg_activity, pg_stat_statements, and pgbadger to analyze database queries and performance. It emphasizes reproducing issues, profiling code with tools like the Odoo profiler, and fixing problems in an iterative process.
The document discusses visualizing metrics data from production services in real time. It recommends using the Metrics library to collect metrics on requests, memory usage, and other factors from services. The visualized data provides insights and safety by surfacing what is happening with services and resource usage. Real-time monitoring allows issues to be detected and addressed quickly.
Mastering Spring Boot's Actuator with Madhura Bhave (VMware Tanzu)
The document discusses Spring Boot Actuator, a module that allows monitoring and management of Spring Boot applications. It describes the various endpoints exposed by Actuator for tasks like health checks, metrics collection, and accessing bean configuration details. It also covers how to write custom endpoints and leverage existing endpoint functionality through extensions. The document provides examples of annotations used to build endpoints and operations along with HTTP request formats.
MongoDB Europe 2016 - Enabling the Internet of Things at Proximus - Belgium's...MongoDB
Proximus is one of the biggest Telecom companies in the Belgian market. This year the company began developing a new IoT network using LoRaWan technology. The talk will detail our development team’s search for a database suited to meet the needs of our IoT project, the selection and implementation of MongoDB as a database, as well as well as how we built a system for storing a variety of sensor data with high throughput by leveraging sleepy.mongoose. The talk will also discuss how different decisions around data storage impact applications in regards to both performance and total cost.
The document contains source code for a client-server application written in Java. The client code establishes a socket connection to the server, allows sending and receiving messages, and closes the connection. The server code starts by binding to a port, accepts new connections from clients, and spawns a new thread to handle each client connection concurrently. It reads and writes data from the socket and closes the connection when the client disconnects. The code includes classes for the client, server, and thread handling each client connection.
The code examples show source code for a client and server application for a chat program. The client code defines functions for connecting to the server, sending and receiving messages. The server code defines functions for starting the server, accepting new connections from clients, and handling message receives and sends between connected clients. The code implements multi-threaded processing to concurrently handle multiple client connections to the server.
The document contains source code for a client-server chat application written in Java. The client code establishes a socket connection to the server, reads user input and sends messages to the server. The server code initializes a server socket to listen for client connections, spawns a new thread for each client, reads incoming messages and sends responses. The code includes graphical user interface components for selecting the client or server role, composing and displaying messages.
This document describes the implementation of a simple REST server in Qt using reflection. It discusses how the Qt meta-object compiler (moc) is used, the abstract and concrete server classes, building the route tree using reflection, handling new connections in worker threads, calling methods based on the request, and using reflection for testing. The abstract server class inherits from QTcpServer and uses slots decorated with tags to implement routes. Worker threads handle individual connections and parse requests to call the appropriate method. Reflection is leveraged throughout to build routes and dispatch requests without explicit registration or mapping.
Detection of REST Patterns and Antipatterns: A Heuristics-based ApproachFrancis Palma
The document describes two responses from the Dropbox API. The first response includes metadata about a folder and its contents. The second response includes information about a specific user account. Both responses lack hyperlinks that could be followed to other resources.
WebCamp: Developer Day: Web Security: Cookies, Domains and CORS - Юрий Чайков...GeeksLab Odessa
Web Security: Cookies, Domains and CORS
Юрий Чайковский
О предложенном еще в 1995 году и актуальным до сегодняшнего дня принципе одинакового источника (Same-origin policy) и о применении и ограничениях при междоменных запросах. Пример CSRF атак, а также правила конфигурации сервера для защиты от них. О последних нововведениях, касающихся контроля происхождения контента для предотвращения XSS атак. Кроме того:
- Принцип одинакового источника.
- Использование междоменных запросов.
- CSRF атаки (с демонстрацией).
- Классификация браузерных запросов.
- Ограничения междоменных запросов.
- Серверный контроль доступа.
- Особенности Internet Explorer 8, 9.
- Принцип безопасности контента (CSP).
Philipp Krenn | Make Your Data FABulous | Codemotion Madrid 2018Codemotion
The CAP theorem is widely known for distributed systems, but it's not the only tradeoff you should be aware of. For datastores there is also the FAB theory and just like with the CAP theorem you can only pick two: fast, accurate, big. While Fast and Big are relatively easy to understand, Accurate is a bit harder to picture. This talk shows some concrete examples of accuracy tradeoffs Elasticsearch can take for terms aggregations, cardinality aggregations with HyperLogLog++, and the IDF part of full-text search. Or how to trade some speed or the distribution for more accuracy.
1) Hatohol is a server that collects and merges data from Zabbix and Nagios servers. It has a web-based client for visualizing this data.
2) The Hatohol server architecture pulls data from Zabbix and Nagios using APIs and stores it in a unified database. The server also has a REST API for the client.
3) Future plans for Hatohol include adding an action framework to allow it to take actions based on triggers, improving high availability, adding graphing capabilities, and a more sophisticated web client.
The document discusses MongoDB performance tuning. It begins by distinguishing between optimizing, which involves restructuring applications and data, and performance tuning, which experiments with system modifications. It recommends investigating performance using log files, the profiler, and explain queries. Creating an index on first_name and last_name fields improved the performance of a query searching on first_name from 480ms to 7ms by enabling an index scan instead of a collection scan. The document suggests continuing performance monitoring and investigating other issues at future meetings.
Node.js is an asynchronous JavaScript runtime that allows for efficient handling of I/O operations. The presentation discusses developing with Node.js by using modules from NPM, debugging with node-inspector, common pitfalls like blocking loops, and best practices like avoiding large heaps and offloading intensive tasks. Key Node.js modules demonstrated include Express for web frameworks and Socket.io for real-time applications.
This document discusses tuning MongoDB performance. It covers tuning queries using the database profiler and explain commands to analyze slow queries. It also covers tuning system configurations like Linux settings, disk I/O, and memory to optimize MongoDB performance. Topics include setting ulimits, IO scheduler, filesystem options, and more. References to MongoDB and Linux tuning documentation are also provided.
MongoDB World 2016: Deciphering .explain() OutputMongoDB
The document discusses different explain modes for MongoDB queries and aggregations. It begins with an overview of explain() and query plans, then covers the default "queryPlanner" mode which shows the winning and rejected plans. It also mentions the "executionStats" and "allPlansExecution" modes which provide more runtime statistics. The document aims to help understand how queries and aggregations are executed and troubleshoot performance issues.
This document provides an overview of Cuckoo sandbox and tips for using and customizing it. It discusses supported platforms and hypervisors, how to retrieve analysis results using signatures, different ways to write hooks, and examples of analyzing malware like Andromeda and Locky. The document also shares some "goodies" like redirecting SMTP traffic and injecting emulator headers to trigger behaviors.
Webinar slides: How to Secure MongoDB with ClusterControlSeveralnines
Watch the slides of our webinar on “How to secure MongoDB with ClusterControl” and find out about the essential steps necessary to secure MongoDB and how to verify if your MongoDB instance is safe.
The recent MongoDB ransom hack caused a lot of damage and outages, while it could have been prevented with maybe two or three simple configuration changes. MongoDB offers a lot of security features out of the box, however it disables them by default.
In this webinar, we explain which configuration changes are necessary to enable MongoDB’s security features, and how to test if your setup is secure after enablement. We also demonstrate how ClusterControl enables security on default installations. And we cover how to leverage the ClusterControl advisors and the MongoDB Audit Log to constantly scan your environment, and harden your security even more.
AGENDA
What is the MongoDB ransom hack?
What other security threats are valid for MongoDB?
How to enable authentication / authorisation
How to secure MongoDB from ransomware
How to scan your system
ClusterControl MongoDB security advisors
Live Demo
SPEAKER
Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and Database expert with over 15 years experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision upon the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.
Beyond PHP - it's not (just) about the code (Wim Godden)
Most PHP developers focus on writing code. But creating Web applications is about much more than just writing PHP. Take a step outside the PHP cocoon and into the big PHP ecosphere to find out how small code changes can make a world of difference on servers and network. This talk is an eye-opener for developers who spend over 80% of their time coding, debugging and testing.
The document provides best practices for handling performance issues in an Odoo deployment. It recommends gathering deployment information, such as hardware specs, number of machines, and integration with web services. It also suggests monitoring tools to analyze system performance and important log details like CPU time, memory limits, and request processing times. The document further discusses optimizing PostgreSQL settings, using tools like pg_activity, pg_stat_statements, and pgbadger to analyze database queries and performance. It emphasizes reproducing issues, profiling code with tools like the Odoo profiler, and fixing problems in an iterative process.
The document discusses visualizing metrics data from production services in real time. It recommends using the Metrics library to collect metrics on requests, memory usage, and other factors from services. The visualized data provides insights and safety by surfacing what is happening with services and resource usage. Real-time monitoring allows issues to be detected and addressed quickly.
Mastering Spring Boot's Actuator with Madhura Bhave (VMware Tanzu)
The document discusses Spring Boot Actuator, a module that allows monitoring and management of Spring Boot applications. It describes the various endpoints exposed by Actuator for tasks like health checks, metrics collection, and accessing bean configuration details. It also covers how to write custom endpoints and leverage existing endpoint functionality through extensions. The document provides examples of annotations used to build endpoints and operations along with HTTP request formats.
This talk was prepared for the November 2013 DataPhilly Meetup: Data in Practice ( http://www.meetup.com/DataPhilly/events/149515412/ )
Map Reduce: Beyond Word Count by Jeff Patti
Have you ever wondered what map reduce can be used for beyond the word count example you see in all the introductory articles about map reduce? Using Python and mrjob, this talk will cover a few simple map reduce algorithms that in part power Monetate's information pipeline
Bio: Jeff Patti is a backend engineer at Monetate with a passion for algorithms, big data, and long walks on the beach. Prior to working at Monetate he performed software R&D for Lockheed Martin, where he worked on projects ranging from social network analysis to robotics.
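A "beyond word count" job can be illustrated without a cluster: the three MapReduce phases computing a per-key average instead of a count. This is a pure-Python sketch; the records and field names are invented for illustration.

```python
from collections import defaultdict

# The three MapReduce phases in plain Python, computing an average
# value per key rather than a word count. Records are made up.

records = [
    {"page": "/home", "duration": 4.0},
    {"page": "/home", "duration": 6.0},
    {"page": "/cart", "duration": 10.0},
]

# Map: emit (key, value) pairs.
mapped = [(r["page"], r["duration"]) for r in records]

# Shuffle: group values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce: collapse each key's values to one result (here, the mean).
averages = {key: sum(vals) / len(vals) for key, vals in grouped.items()}
print(averages)  # {'/home': 5.0, '/cart': 10.0}
```

Frameworks like mrjob wrap exactly this shape: you supply the map and reduce steps, and the shuffle happens between them.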
This document discusses refactoring code to improve its design without changing external behavior. It notes that refactoring involves making small, incremental changes rather than large "big bang" refactorings. Code smells that may indicate a need for refactoring include duplication, long methods, complex conditional logic, speculative code, and overuse of comments. Techniques discussed include extracting methods, removing duplication, using meaningful names, removing temporary variables, and applying polymorphism. The document emphasizes that refactoring is an investment that makes future changes easier and helps avoid bugs, and encourages learning from other programming communities.
The document contains summaries of several C programming examples:
1. Programs to calculate the area and circumference of a circle, find simple interest, convert temperatures between Celsius and Fahrenheit, calculate subject marks and percentages, and calculate gross salary.
2. Additional programs demonstrate swapping values with and without a third variable, finding the greatest of three numbers, determining if a year is a leap year, and identifying integers as odd or even, positive or negative.
3. Further programs check if an integer is divisible by 5 and 11, compare two integers for equality, use a switch statement to print days of the week, and perform arithmetic operations using a switch case.
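Two of the checks above are compact enough to show inline (written in Python here for brevity, though the original examples are in C): the leap-year rule and divisibility by both 5 and 11.

```python
# Leap year: divisible by 4, except century years, unless also
# divisible by 400. Divisibility check: simple modulo tests.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def divisible_by_5_and_11(n):
    return n % 5 == 0 and n % 11 == 0

print(is_leap_year(2000), is_leap_year(1900))  # True False
print(divisible_by_5_and_11(55))               # True
```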
This document discusses caching templates in Template Toolkit to improve performance. It describes how to profile templates to identify optimization opportunities. Implementing caching reduced the total request processing time by 4.9 times and accelerated template rendering by 20 times. The Template::Context::Cacheable module provides caching capabilities and is available on GitHub.
We were all deceived! Automatic memory management will solve all your problems, they said. In managed environments such as the CLR and JVM there will be no memory leaks, they said! After all, memory is cheap and you never need to worry about it again. They all lied. Automatic memory management is a convenient abstraction, and very often it works well. But like every abstraction, sooner or later it leaks, and usually at the least expected and least convenient moment. In this session I will try to open your eyes to the fact that blissful ignorance of this abstraction can be costly. I will show how careless treatment of memory can manifest itself, and what we can gain by writing code in the awareness that memory is not infinite, not cheap, and not always equally fast.
Splunk conf2014 - Lesser Known Commands in Splunk Search Processing Language ... (Splunk)
From one of the most active contributors to Splunk Answers and the IRC channel, this session covers those less popular but still super powerful commands, such as "map", "xyseries", "contingency" and others. This session also showcases tricks such as "eval host_{host} = Value" to dynamically create fields based on other field values, and searches that show concurrency based on start/end times within an event (using gentimes).
This document contains programs and algorithms for simulating different CPU scheduling algorithms like FCFS, SJF, Priority and Round Robin. It also contains a program for implementing the Producer-Consumer problem using semaphores and an algorithm for implementing optimal page replacement.
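The simplest of those scheduling algorithms, FCFS, fits in a few lines (a Python rendering; the original document's programs are in C): given burst times in arrival order, each process waits for the total of all earlier bursts.

```python
# Minimal FCFS (first-come, first-served) sketch: compute each
# process's waiting time and the average. Burst times are the
# classic textbook example.

def fcfs_waiting_times(burst_times):
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # waits for all earlier bursts
        elapsed += burst
    return waits

bursts = [24, 3, 3]
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))  # [0, 24, 27] 17.0
```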
The document presents a scientific calculator project created by a team of 3 students - Mubassir Rahman Rounak, Meghla Jahan Urme, and Tasnim Saima Raita. The presentation contains sections on introduction, basic functions, system design, source code, code explanation, testing, and future scope. The introduction provides background on the scientific calculator created using JavaScript. The basic functions section describes mathematical operations like addition, subtraction, multiplication, division, square, square root, etc. The system design section discusses how flowcharts were used to design the system. The source code and code explanation sections provide details on the coding and programming concepts used.
Alex Woolford, Confluent, Senior Solutions Engineer
This slide deck was used (non-linearly) during the following talk:
Real-time recommendations with Snowplow, Kafka, and Neo4j
Abstract: The value of the information gleaned from a web session often has a very short half-life. Post-hoc analysis is often too late to be actionable. In this short talk, we’ll show you how to generate customer profiles, and build a real-time recommendation engine with clickstream events sourced from Snowplow.
https://www.meetup.com/Saint-Louis-Kafka-meetup-group/events/275915770/
This document discusses AngularJS and single page applications (SPAs). It begins by defining what a SPA is - a client-side application that functions like a desktop application with rich and responsive functionality. Technically, SPAs use HTML5 and JavaScript with lightweight REST/JSON services and data binding. AngularJS is introduced as a complete framework for building rich SPAs that hides DOM manipulation and uses data binding instead of direct DOM changes. The document then covers key aspects of AngularJS including directives, views/controllers/scopes, modules/routes/services, and custom directives. It concludes by discussing some UI elements in AngularJS and notes that real-world SPAs often blur the lines between client and server.
The document summarizes an internship project using machine learning techniques like Newton's method and XGBoost to more accurately assess clients' insurance risks and determine the most suitable insurance plans. It involves creating a function that minimizes risk and maximizes expected returns using variables like age, medical history, and BMI from a dataset. Newton's method is then applied to the function to iteratively eliminate numerical errors and more accurately represent clients' risk scores, correctly assigning them to insurance plans. The results allow insurance companies to invest less in high-risk clients, lowering overall rates while still making accurate assessments.
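The iterative refinement described above can be sketched with a generic Newton's method routine. The risk function below is a toy quadratic, not the internship's actual model; minimizing it means finding the root of its derivative.

```python
# Newton's method sketch: iteratively refine x using a function and
# its derivative until the step size falls below a tolerance.

def newton(f, df, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimize risk(x) = (x - 3)^2 + 1 by solving risk'(x) = 0.
risk_grad = lambda x: 2 * (x - 3)   # the f whose root we seek
risk_hess = lambda x: 2.0           # its derivative
x_min = newton(risk_grad, risk_hess, x0=0.0)
print(round(x_min, 6))  # 3.0
```

For a quadratic, Newton's method lands on the answer in one step; on real risk models the same loop converges over several iterations.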
Talk from "We Are Developers World Congress"
Session Info:
Introduction into the Open Source Framework "E.D.D.I", that has been developed for creating and maintaining multiple Chatbot-Products in a Cooperate Environment. This talk will cover the architecture, how it can be used and how it has been used, based on an example with the Norwegian Company "differ.chat".
Speaker Bio:
Gregor Jarisch
Chatbot development since 2006 (industries such as e-commerce, first-level-support, quality management, education).
Chatbot Lead @ DIFFER.CHAT
Dev Lead of Enterprise-Ready Open Source Chatbot Platform "E.D.D.I.".
Agile and Innovation Coaching
10+ years work experience in Software Development, in particular Web Services.
Events Processing and Data Analysis with Lucidworks Fusion: Presented by Kira... (Lucidworks)
The document discusses using signals and events collected from user interactions to power recommendations and analytics. It describes how signals are collected using Snowplow and stored in Solr. Signals can then be aggregated using Spark to generate recommendations by boosting related search results or constructing a co-occurrence graph. The demo shows how a recommendation API uses aggregated signals to modify search behavior based on a user's environment.
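The co-occurrence idea behind those recommendations can be sketched in a few lines: count how often two items appear in the same user session, then recommend the items that co-occur most with a given one. The session data below is invented.

```python
from collections import Counter
from itertools import combinations

# Count pairwise co-occurrence of items within sessions, then
# rank co-occurring items as naive recommendations.

sessions = [
    ["tent", "stove", "lantern"],
    ["tent", "stove"],
    ["tent", "lantern"],
]

cooc = Counter()
for items in sessions:
    for a, b in combinations(sorted(set(items)), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(item, k=2):
    # Items co-occurring with `item`, by count desc then name asc.
    scored = [(pair[1], n) for pair, n in cooc.items() if pair[0] == item]
    return [other for other, _ in sorted(scored, key=lambda t: (-t[1], t[0]))[:k]]

print(recommend("tent"))
```

At scale the same counting is what a Spark aggregation over signal logs produces; the graph of pair counts is then used to boost related search results.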
Password protected personal diary report (Moueed Ahmed)
The document is a password protected personal diary program code written in C. It includes functions for adding, viewing, editing, and deleting records from the diary. The main function acts as the driver code and displays a menu for the user to select these options. Additional functions handle password validation, reading/writing data to binary files, and performing the necessary operations for each diary record option. Header files like stdio.h, string.h are included for input/output and string handling functionality.
This document discusses randomization using SystemVerilog. It begins by introducing constraint-driven test generation and random testing. It explains that SystemVerilog allows specifying constraints in a compact way to generate random values that meet the constraints. The document then discusses using objects to model complex data types for randomization. It provides examples of using SystemVerilog functions like $random, $urandom, and $urandom_range to generate random numbers. It also discusses constraining randomization using inline constraints and randomizing objects with the randomize method.
This document discusses using Python to connect to and interact with a PostgreSQL database. It covers:
- Popular Python database drivers for PostgreSQL, including Psycopg which is the most full-featured.
- The basics of connecting to a database, executing queries, and fetching results using the DB-API standard. This includes passing parameters, handling different data types, and error handling.
- Additional Psycopg features like server-side cursors, transaction handling, and custom connection factories to access columns by name rather than number.
In summary, it provides an overview of using Python with PostgreSQL for both basic and advanced database operations from the Python side.
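The DB-API flow described above (connect, cursor, parameterized execute, fetch) is driver-agnostic. Psycopg against PostgreSQL uses %s placeholders; the stdlib sqlite3 driver is used here so the sketch runs without a server, with ? placeholders, but the flow is the same.

```python
import sqlite3

# The standard DB-API pattern: connect, get a cursor, execute
# parameterized SQL, commit, fetch.

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# Always pass parameters separately -- never build SQL with string
# formatting, which invites SQL injection.
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "grace")])
conn.commit()

cur.execute("SELECT name FROM users WHERE id = ?", (2,))
row = cur.fetchone()
print(row)  # ('grace',)
conn.close()
```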
This lab shows how to optimize Java code to improve the performance of Java ME applications. The document describes an app called OptimizeMe that simulates a simple game loop to test performance. The app measures how long it takes to complete each loop iteration and displays the frame time. The goal is to optimize the code in the "work" portion of the loop to reduce frame times.
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating between replicasets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migrating from other data stores like AWS DocumentDB, Azure CosmosDB, DynamoDB, and relational databases are briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... (MongoDB)
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... (MongoDB)
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data (MongoDB)
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
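One schema design that often comes up for time-series data in MongoDB is the bucket pattern: instead of one document per reading, readings are grouped into one document per sensor per hour, cutting document count and index size. A plain-Python sketch of building bucket documents (field names and readings are illustrative):

```python
from collections import defaultdict
from datetime import datetime

# Group per-reading events into hourly bucket documents per sensor.

readings = [
    ("sensor-1", datetime(2020, 1, 1, 10, 5), 21.0),
    ("sensor-1", datetime(2020, 1, 1, 10, 35), 21.4),
    ("sensor-1", datetime(2020, 1, 1, 11, 2), 20.9),
]

buckets = defaultdict(lambda: {"measurements": []})
for sensor, ts, value in readings:
    # Truncate the timestamp to the hour to pick the bucket.
    key = (sensor, ts.replace(minute=0, second=0, microsecond=0))
    doc = buckets[key]
    doc["sensor_id"], doc["bucket_start"] = key
    doc["measurements"].append({"ts": ts, "value": value})

docs = list(buckets.values())
print(len(docs), [len(d["measurements"]) for d in docs])  # 2 [2, 1]
```

Each resulting document is what you would insert into MongoDB; three readings become two documents here instead of three.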
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] (MongoDB)
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 (MongoDB)
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ... (MongoDB)
MongoDB Kubernetes operator is ready for prime time. Learn about how MongoDB can be used with the most popular orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset (MongoDB)
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart (MongoDB)
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin... (MongoDB)
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
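The E-S-R guideline can be sketched as a small helper that orders compound-index keys from the roles fields play in a query: equality fields first, then sort fields (keeping their direction), then range fields. The query shape in the example is hypothetical; with PyMongo, the result is what you would pass to collection.create_index().

```python
# Order compound-index keys per the E-S-R (Equality, Sort, Range)
# guideline described above.

def esr_index(equality, sort, range_):
    keys = [(f, 1) for f in equality]   # equality fields first
    keys += list(sort)                  # sort fields keep direction
    keys += [(f, 1) for f in range_]    # range fields last
    return keys

# e.g. find({"status": "A", "qty": {"$lt": 30}}).sort("ts", -1)
index = esr_index(equality=["status"], sort=[("ts", -1)], range_=["qty"])
print(index)  # [('status', 1), ('ts', -1), ('qty', 1)]
```

This ordering lets the index narrow on the equality match, return documents pre-sorted on ts, and only then scan the qty range.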
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++ (MongoDB)
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
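A typical two-stage pipeline of the kind the talk covers, $match then $group, is shown below as a PyMongo-style pipeline list, then hand-evaluated over an in-memory list so the result is reproducible without a server. The documents are invented; against a real deployment you would pass the same `pipeline` list to collection.aggregate(pipeline).

```python
from collections import defaultdict

# A $match + $group pipeline, evaluated by hand in plain Python.

docs = [
    {"store": "north", "status": "done", "total": 10},
    {"store": "north", "status": "done", "total": 30},
    {"store": "south", "status": "open", "total": 99},
]

pipeline = [
    {"$match": {"status": "done"}},
    {"$group": {"_id": "$store", "avg_total": {"$avg": "$total"}}},
]

# Stage 1 ($match): keep only completed orders.
matched = [d for d in docs if d["status"] == "done"]

# Stage 2 ($group): average totals per store.
groups = defaultdict(list)
for d in matched:
    groups[d["store"]].append(d["total"])
result = [{"_id": k, "avg_total": sum(v) / len(v)} for k, v in groups.items()]
print(result)  # [{'_id': 'north', 'avg_total': 20.0}]
```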
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo... (MongoDB)
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive (MongoDB)
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang (MongoDB)
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm: the secret ingredient for better app... (MongoDB)
...to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications, faster.
MongoDB .local Paris 2020: Upply @MongoDB: When Machine Learning... (MongoDB)
It has never been easier to order online and be delivered in under 48 hours, very often for free. This simplicity of use hides a complex market worth more than $8,000 billion.
Data is well known in the Supply Chain world (routes, information about goods, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise with Data Science, Upply is redefining the fundamentals of the Supply Chain, enabling every player to overcome the volatility and inefficiency of the market.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, some going as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors (DianaGray10)
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, e.g. using a person document instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to put into action immediately
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency (ScyllaDB)
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
19. MongoDB quick-look
MongoDB is a NoSQL database
Data is stored in documents
The schema can change! (even between documents)
PyMongo is the recommended Python driver
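A minimal sketch of what "the schema can change, even between documents" means in practice. The field names here (and the sample math/vocabulary cards) are illustrative assumptions, not the workshop app's actual schema; the `insert_many` call is shown commented out because it needs a running local `mongod`.

```python
# Two documents with different fields -- MongoDB happily stores both
# in the same collection, because there is no fixed table schema.
doc_a = {"question": "7 x 8", "answer": 56}                      # a math card
doc_b = {"question": "gato", "translation": "cat", "hints": ["animal"]}

# With a live server you could insert both into one collection:
# from pymongo import MongoClient
# cards = MongoClient("localhost", 27017)["aprender"]["mathcards"]
# cards.insert_many([doc_a, doc_b])

print(set(doc_a) ^ set(doc_b))  # the documents share only "question"
```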
22. from pymongo import MongoClient
# SET UP THE CONNECTION
client = MongoClient("localhost", 27017)
db = client["aprender"]
mathcards = db["mathcards"]  # collections are accessed via the database object,
users = db["users"]          # not via the client (client[...] returns a database)
23. from pymongo import MongoClient
from secure import MONGO_USERNAME, MONGO_PASSWORD
# SET UP THE CONNECTION
client = MongoClient("localhost", 27017)
db = client["aprender"]
mathcards = db["mathcards"]  # collections are accessed via the database object
users = db["users"]
# AUTHENTICATE THE CONNECTION
client.aprender.authenticate(MONGO_USERNAME,
                             MONGO_PASSWORD,
                             mechanism='SCRAM-SHA-1')
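A note for readers on newer PyMongo releases: `Database.authenticate()` was deprecated and later removed, and the idiomatic replacement is to pass credentials in a MongoDB connection URI (or as `MongoClient` keyword arguments). The sketch below only assembles the URI; the username and password are placeholders standing in for the values the slide imports from `secure.py`, and the actual connection is commented out because it needs a live server.

```python
# Build a connection URI; MongoClient(uri) then handles authentication
# itself, with no separate authenticate() call.
MONGO_USERNAME = "student"           # placeholder for the secure.py value
MONGO_PASSWORD = "example-password"  # placeholder for the secure.py value

uri = (
    f"mongodb://{MONGO_USERNAME}:{MONGO_PASSWORD}"
    "@localhost:27017/aprender?authMechanism=SCRAM-SHA-1"
)

# With pymongo installed and mongod running:
# from pymongo import MongoClient
# client = MongoClient(uri)
# db = client["aprender"]
```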
51. Individual work time
Search for tasks in the .py file
Take a moment to write one or more pipeline stages
Check end of file comments if stuck
52. Multi-stage aggregation pipelines
task6: Response time by operand2 [2,3,4,5,6,9] for one user
task7: Percent accuracy (“score”) by operand2 for one user
task8: Retrieve, for one user, operand2 w/ lowest score
task9: Retrieve, for one user, operand2 w/ fastest time
task10: Retrieve operand2 which challenged the most users
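A sketch in the spirit of task6, showing how a multi-stage pipeline chains `$match`, `$group`, and `$sort`. The field names (`user`, `operand2`, `response_time_ms`) and the user id are assumptions about the session schema, not the workshop file's actual fields; the `aggregate` call is commented out because it needs a live database.

```python
# Average response time per operand2 value, for a single user.
pipeline = [
    {"$match": {"user": "user042"}},            # keep one user's sessions
    {"$group": {"_id": "$operand2",             # one bucket per operand2
                "avg_ms": {"$avg": "$response_time_ms"},
                "attempts": {"$sum": 1}}},
    {"$sort": {"avg_ms": 1}},                   # fastest operands first
]

# With a live server:
# results = db["sessions"].aggregate(pipeline)
```

Each stage's output feeds the next, so the `$group` only ever sees documents that survived the `$match`; doing the filtering first keeps the grouping cheap.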
74. Conclusion
PyMongo = easy to learn
You can learn PyMongo
The aggregation pipeline enables you to run data science code efficiently on your database servers without needing to move any data
77. Resources
Asya Kamsky's talk! 4:30PM WED. in Grand Ballroom
“Powerful Analysis with the Aggregation Pipeline”
MongoDB University (free!)
https://university.mongodb.com/
Aggregation Pipeline Quick Reference
https://docs.mongodb.com/manual/meta/aggregation-quick-reference/
MongoDB Day-long conferences
From the $group docs: “you can specify an _id value of null to calculate accumulated values for all the input documents as a whole”
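Grouping with a null `_id` is how you would compute whole-cohort summary statistics from the anonymized session data. The field name `response_time_ms` is an assumed example, not the workshop schema; the `aggregate` call is omitted since it needs a live database.

```python
# $group with "_id": None (null in BSON) folds every input document
# into a single bucket -- one summary row for the whole cohort.
pipeline = [
    {"$group": {"_id": None,
                "overall_avg_ms": {"$avg": "$response_time_ms"},
                "total_answers": {"$sum": 1}}},
]
# With a live server: db["sessions"].aggregate(pipeline)
```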
From the $unwind docs: “Deconstructs an array field from the input documents to output a document for each element. Each output document is the input document with the value of the array field replaced by the element.”
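The quoted behaviour is easy to mimic in a few lines of plain Python, which can make `$unwind` click before you use it in a real pipeline. The `answers` field below is a hypothetical example, not the workshop schema.

```python
def unwind(doc, field):
    """Pure-Python illustration of $unwind: emit one copy of doc
    per element of doc[field], replacing the array with that element."""
    return [{**doc, field: item} for item in doc[field]]

session = {"user": "user042", "answers": [3, 5, 9]}
print(unwind(session, "answers"))
# -> [{'user': 'user042', 'answers': 3},
#     {'user': 'user042', 'answers': 5},
#     {'user': 'user042', 'answers': 9}]
```

In a real pipeline, `{"$unwind": "$answers"}` would do this server-side, typically followed by a `$group` over the now-flattened elements.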