This talk will serve as a practical introduction to Distributed Tracing. We will see how to make the best use of open source distributed tracing platforms like Hypertrace with Azure to find the root cause of problems and anticipate issues in our critical business applications.
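To make the core idea concrete, here is a toy, stdlib-only sketch of what a tracing platform records (this is not Hypertrace's API, just an illustration): every unit of work becomes a "span" that shares a trace ID with its parent, so one slow span pinpoints the root cause of a slow request.

```python
import time
import uuid
from contextlib import contextmanager

# Illustrative only: in a real system spans are exported to a collector.
SPANS = []
_current = []   # stack of (trace_id, span_id) for parent/child links

@contextmanager
def span(name):
    trace_id = _current[-1][0] if _current else uuid.uuid4().hex
    parent_id = _current[-1][1] if _current else None
    span_id = uuid.uuid4().hex[:8]
    _current.append((trace_id, span_id))
    start = time.perf_counter()
    try:
        yield
    finally:
        _current.pop()
        SPANS.append({
            "name": name, "trace_id": trace_id, "span_id": span_id,
            "parent_id": parent_id,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })

# One request flowing through two "services":
with span("checkout"):
    with span("payment-service"):
        time.sleep(0.01)

# Both spans share a trace ID, so a UI can stitch them into one timeline.
assert SPANS[0]["trace_id"] == SPANS[1]["trace_id"]
```

Because the inner span closes first, it is recorded first, and its `parent_id` points at the outer span; a tracing UI reconstructs the call tree from exactly these links.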
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] - MongoDB
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
Glynn Bird - Building the "microservices way" involves breaking monolithic IT systems into small, decoupled services that each do one job well. This talk builds a practical microservices architecture live, using small Node.js apps that perform storage, analytics and visualisation tasks. Learn how to orchestrate your own microservice architecture using simple, easily-tested building blocks.
Ronan Corkery, kdb+ developer at Kx Systems: “Kdb+: How Wall Street Tech can Speed up the World” - Maya Lumbroso
Bio:
Ronan Corkery is a kdb+ engineer who has been working with Kx and First Derivatives for the past 4 years. Currently based at Total Gas and Power, he spent his first 2 years working with Morgan Stanley.
Abstract:
Ronan's presentation will focus on the vertical industries into which Kx's technology, formerly used almost exclusively in finance, has been moving. He will present proven solutions, introduce the overall architecture that Kx uses, and lay out potential opportunities to work with Kx.
Apache Druid: The Foundation of Fortune 500 “Analytical Decision-Making” - Rommel Garcia
This document discusses Apache Druid, an open-source distributed real-time analytics database. It summarizes Druid's evolution, architecture, use cases, and how companies use it. The document outlines Druid's ability to handle large, high-dimensional datasets with sub-second queries and discusses its core components like segments for efficient storage and parallelism. It concludes by inviting the reader to join the Druid community.
This document discusses building a single database containing all web data by creating a scalable web crawler, data store, and data retrieval system. It describes the challenges of collecting and structuring data from millions of websites, building a NoSQL data store using Cassandra to handle terabytes of data, and providing an intuitive RESTful API for querying the unified database. The project aims to make web data easily accessible through a single source as if querying a database.
It’s All About The Cards: Sharing on Social Media Encouraged HTML Metadata G... - Shawn Jones
In a perfect world, all articles consistently contain sufficient metadata to describe the resource. We know this is not the reality, so we are motivated to investigate the evolution of the metadata that is present when authors and publishers supply their own. Because applying metadata takes time, we recognize that each news article author has a limited metadata budget with which to spend their time and effort. How are they spending this budget? What are the top metadata categories in use? How did they grow over time? What purpose do they serve? We also recognize that not all metadata fields are used equally. What is the growth of individual fields over time? Which fields experienced the fastest adoption? In this paper, we review 227,726 HTML news articles from 29 outlets captured by the Internet Archive between 1998 and 2016. Upon reviewing the metadata fields in each article, we discovered that 2010 began a metadata renaissance as publishers embraced metadata for improved search engine ranking, search engine tracking, social media tracking, and social media sharing. When analyzing individual fields, we find that one application of metadata stands out above all others: social cards -- the cards generated by platforms like Twitter when one shares a URL. Once a metadata standard was established for cards in 2010, its fields were adopted by 20% of articles in the first year and reached more than 95% adoption by 2016. This rate of adoption surpasses efforts like schema.org and Dublin Core by a fair margin. When confronted with these results on how news publishers spend their metadata budget, we must conclude that it is all about the cards.
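The card metadata the paper counts lives in ordinary `<meta>` tags in an article's head. As a rough sketch (the field names `twitter:card` and `og:title` are the real standard ones; the sample HTML is invented), a stdlib parser can collect them like this:

```python
from html.parser import HTMLParser

# Collect Twitter Card and Open Graph fields from an article's HTML head.
class CardMetadataParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")   # Twitter uses name=, OG uses property=
        if key and (key.startswith("twitter:") or key.startswith("og:")):
            self.fields[key] = a.get("content", "")

sample = """<head>
<meta name="twitter:card" content="summary_large_image">
<meta property="og:title" content="Example headline">
<meta name="description" content="ignored: not a card field">
</head>"""

parser = CardMetadataParser()
parser.feed(sample)
print(parser.fields)
```

A crawl over archived articles, as in the paper, amounts to running a collector like this over each snapshot and tallying which fields appear in which year.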
Comparison of Excel add-ins and other solutions for implementing data mining or machine learning solutions on the Microsoft stack - includes coverage of XLMiner, Analysis Services Data Mining and Predixion Software
MongoDB .local Houston 2019: Building an IoT Streaming Analytics Platform to ... - MongoDB
Corva's analytics platform enables real-time engineering and machine learning predictions and powers faster and safer drilling. The platform uses serverless AWS Lambda and an extensible, data-driven API with MongoDB to handle 100,000+ requests per minute of streaming sensor data.
Build 2017 - P4010 - A lap around Azure HDInsight and Cosmos DB Open Source A... - Windows Developer
Recently, we released the Spark Connector for our distributed NoSQL service – Azure Cosmos DB (formerly known as Azure DocumentDB). By connecting Apache Spark running on top of Azure HDInsight to Azure Cosmos DB, you can accelerate your ability to solve fast-moving data science and machine learning problems. The Spark to Azure Cosmos DB connector efficiently exploits the native Cosmos DB managed indexes, enables updateable columns when performing analytics, and pushes predicate filtering down against fast-changing, globally-distributed data across IoT, data science, and analytics scenarios. Come learn how you can perform blazing fast planet-scale data processing with Azure Cosmos DB and HDInsight.
Building an IoT Kafka Pipeline in Under 5 Minutes - SingleStore
This document discusses building an IoT Kafka pipeline using MemSQL in under 5 minutes. It begins with an overview of IoT, Kafka, and operational data warehouses. It then discusses MemSQL and how it functions as an operational data warehouse by continuously loading and querying data in real-time. The document demonstrates launching a MemSQL cluster, creating schemas and pipelines to ingest, transform, persist and analyze IoT data from Kafka. It emphasizes MemSQL's ability to handle different data types and scales from IoT at high throughput with low latency.
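The ingest-transform-persist loop that a MemSQL/SingleStore pipeline automates can be sketched by hand in a few lines. This is an illustration only, not the product's API: the JSON messages stand in for a Kafka topic and an in-memory SQLite table stands in for the operational warehouse.

```python
import json
import sqlite3

# Pretend these sensor readings arrived on a Kafka topic (invented data).
messages = [
    json.dumps({"device": "pump-1", "temp_c": 71.5}),
    json.dumps({"device": "pump-2", "temp_c": 64.0}),
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (device TEXT, temp_f REAL)")

for raw in messages:
    event = json.loads(raw)                    # ingest
    temp_f = event["temp_c"] * 9 / 5 + 32      # transform
    db.execute("INSERT INTO readings VALUES (?, ?)",
               (event["device"], temp_f))      # persist

# The data is queryable the moment it lands -- the "operational" part.
hottest = db.execute("SELECT device, MAX(temp_f) FROM readings").fetchone()
print(hottest)
```

What the pipeline feature adds over this loop is exactly the operational burden removed: parallel partition-aware consumption, exactly-once loading, and restart handling.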
Big Data with Hadoop, Spark and BigQuery (Google Cloud Next Extended 2017 Kar... - Imam Raza
Google Next Extended (https://cloudnext.withgoogle.com/) is an annual Google event focusing on Google cloud technologies. This presentation is from a tech talk held at the Google Next Extended 2017 Karachi event.
Machine Learning on the Microsoft Stack - Lynn Langit
This document provides an overview of machine learning solutions, including on-premise options using Excel add-ins, SQL Server, and R Studio, as well as cloud solutions on Azure and Predixion. It defines common machine learning roles and algorithms, discusses the R programming language, and compares features of the different solutions such as required infrastructure, complexity, costs, and capabilities.
Azure Data Lake for super developers, 2017 CodeCamp - Radu Vunvulea
Nowadays digital information is produced by every device we touch. What we do after we receive this data can change the way we do business, but to store and process it we need a powerful system like Azure Data Lake. In this session we will discover the secrets behind Azure Data Lake and why this service should be on our roadmap.
How to get the best of both: MongoDB is great for low-latency access to recent data; Treasure Data is great as an ever-growing store of historical data. In the latter case, one need not worry about scaling.
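The hot/cold split described above boils down to routing each read by the age of the data it touches. A minimal sketch, with plain dicts standing in for the two systems (MongoDB as the hot store, Treasure Data as the cold one) and an invented seven-day window:

```python
from datetime import datetime, timedelta, timezone

HOT_WINDOW = timedelta(days=7)      # assumption: a week of "recent" data
now = datetime.now(timezone.utc)

hot_store = {now - timedelta(days=1): "yesterday's events"}
cold_store = {now - timedelta(days=90): "last quarter's events"}

def read(timestamp):
    """Route a read to the store that owns that time range."""
    if now - timestamp <= HOT_WINDOW:
        return ("hot", hot_store.get(timestamp))
    return ("cold", cold_store.get(timestamp))

assert read(now - timedelta(days=1)) == ("hot", "yesterday's events")
assert read(now - timedelta(days=90)) == ("cold", "last quarter's events")
```

Writes follow the same rule in reverse: everything lands in the hot store first, and an expiry job migrates aged-out data to the historical store, which is why only the cold side ever needs to grow without bound.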
Streaming Data in the Cloud with Confluent and MongoDB Atlas | Robert Walters... - HostedbyConfluent
This document discusses streaming data between Confluent Cloud and MongoDB Atlas. It provides an overview of MongoDB Atlas and its fully managed database capabilities in the cloud. It then demonstrates how to stream data from a Python generator application to MongoDB Atlas using Confluent Cloud and its connectors. The presentation concludes by providing a reference architecture for connecting Confluent Platform to MongoDB.
This document summarizes a system that processes billions of daily events using Apache Spark and Kafka. The system ingests 1.5-2 million events per minute from various sources, generates over 30 million events per minute, and stores over 5 TB of raw data daily. It uses Spark Streaming to process real-time data from Kafka, Spark jobs to handle raw data stored in Parquet format, and updates MySQL 1500 times per second with aggregated results. The system is run on Amazon EC2 infrastructure and stores data on S3 and Glacier.
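The reason the system can update MySQL only 1500 times per second while ingesting millions of events per minute is the aggregation step: the streaming job folds raw events into small per-window counters before anything touches the relational store. A stdlib sketch of that rollup (event shapes are invented, not taken from the talk):

```python
from collections import Counter

events = [
    {"minute": "2024-01-01T00:01", "type": "click"},
    {"minute": "2024-01-01T00:01", "type": "click"},
    {"minute": "2024-01-01T00:02", "type": "view"},
]

# Fold the raw stream into (minute, type) -> count rollups.
rollup = Counter((e["minute"], e["type"]) for e in events)

# Each counter row becomes one upsert against the aggregate table,
# so the write rate to MySQL scales with windows, not with raw events.
for (minute, etype), count in sorted(rollup.items()):
    print(f"upsert: minute={minute} type={etype} count+={count}")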
Unlocking Value in Device Data Using Spark: Spark Summit East talk by John La... - Spark Summit
HP ships millions of PCs, printers, and other devices every year to customers in all market segments. More customers are seeking services provided with our products, enabling new opportunities for HP to create services from the data we can collect from our devices. Every device we ship is an IoT endpoint with a powerful CPU that can capture rich data. Insights from this data are used internally to improve our products and focus on customer needs.
In this presentation, John will focus on HP’s journey to enabling Big Data analytics from within a large enterprise environment. He will review the challenges and how HP decided on AWS, Apache Spark and Databricks as the foundation for their entry into Big Data Analytics. John will also review how HP uses Spark to build analytic services from the data they generate from their devices.
This document discusses building robust data pipelines that stream events between applications and services in real time using MongoDB and Confluent. It outlines how event-driven architectures with Apache Kafka and MongoDB can help customers address challenges like reacting to new data sources in real time, modernizing applications, and gaining insights from data. Specific use cases are discussed like application modernization, microservices, analytics, and IoT. Customer examples are provided from healthcare, financial services, and other industries. The benefits of MongoDB's document data model and transactions are highlighted. Finally, the document demonstrates MongoDB Atlas and Confluent Platform capabilities.
Revolutionizing the customer experience - Hello Engagement Database - Dipti Borkar
This document discusses Couchbase's engagement database platform and its key features. It summarizes Couchbase as having a memory-first architecture that provides fast access to data and indexes in memory. It also supports features like flexible querying with N1QL, global indexing, full text search, mobile sync capabilities, and real-time analytics. The document outlines Couchbase's core design principles and how it provides a unified platform for key-value, query, search and analytics workloads across public and private clouds.
Joseph keynote @ Microsoft Data Amp, April 2017 - SeokJin Han
Joseph Sirosh, Corporate Vice President for the Data Group at Microsoft, gave his keynote at Microsoft Data Amp 2017. Watch the video at: https://www.microsoft.com/en-us/sql-server/data-amp#
The document provides an overview of the Databricks platform, which offers a unified environment for data engineering, analytics, and AI. It describes how Databricks addresses the complexity of managing data across siloed systems by providing a single "data lakehouse" platform where all data and analytics workloads can be run. Key features highlighted include Delta Lake for ACID transactions on data lakes, auto loader for streaming data ingestion, notebooks for interactive coding, and governance tools to securely share and catalog data and models.
This document provides an outline for a talk on cloud computing. It begins with an introduction to cloud concepts and technologies like virtualization and parallel computing models. It then discusses different cloud models including IaaS, PaaS and SaaS. The outline includes demonstrations of cloud capabilities with Amazon AWS and Microsoft Azure, as well as data and computing models using MapReduce. It concludes with a case study of a real business application of the cloud and a question and answer section.
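The MapReduce model the outline demonstrates has three phases: map emits key/value pairs, shuffle groups them by key, and reduce folds each group. A minimal in-process word-count sketch (the canonical example; real frameworks distribute exactly these phases across machines):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit (word, 1) for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Fold each group into a single result per key.
    return {key: sum(values) for key, values in groups.items()}

docs = ["cloud compute cloud", "compute storage"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
print(counts)  # {'cloud': 2, 'compute': 2, 'storage': 1}
```

The cloud angle from the outline is that each phase is embarrassingly parallel: map tasks run per input split and reduce tasks per key range, which is what lets IaaS clusters scale the same program from two documents to billions.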
SQL Server 2017 Overview and Partner Opportunities - Travis Wright
SQL Server 2017 is going to be released later this year. This session will cover what to expect and how partners can deliver additional value to SQL Server customers.
This document provides an overview of an AWS event. It includes details about the AWS business, including $16B in annual revenue and over 135,000 active customers. It discusses the breadth of AWS services and tools available, positioning AWS as a leader in cloud infrastructure. The document outlines how AWS gives customers superpowers with supersonic speed and pace of innovation. It provides examples of how customers are using AWS services to transform their businesses.
Customer migration to Azure SQL database, December 2019 - George Walters
This is a real-life story of how a software-as-a-service application moved to the cloud, to Azure, over a period of two years. We discuss the migration, business drivers, technology, and how it got done. We also talk through more modern ways to refactor or change code to get into the cloud nowadays.
Alluxio provides a virtual unified file system that allows for unified access and accelerated performance of data across multiple storage systems and tiers. It addresses challenges of separating compute and storage in modern data architectures by providing a global namespace, server-side API translation between storage systems, and intelligent multi-tiering of data across RAM, SSDs and HDDs. Alluxio has been deployed in over 100 production environments across financial services, retail, telecom and other industries to accelerate analytics, machine learning and other workloads.
This document discusses the challenges of modern apps and how Microsoft's Azure cloud services provide solutions. It focuses on Azure Cosmos DB, a globally distributed database service that can scale massive amounts of data across any workload. Cosmos DB provides elastic scaling, guaranteed low latency, comprehensive security and compliance, and helps companies optimize operations and gain insights from IoT and big data.
Big Data LDN 2018: FORTUNE 100 LESSONS ON ARCHITECTING DATA LAKES FOR REAL-TI... - Matt Stubbs
Date: 13th November 2018
Location: Fast Data Theatre
Time: 11:10 - 11:40
Speaker: Sunil Mistry
Organisation: Attunity
About: How do you maximise the value from your operational data? There is a growing need to process and analyse data in motion as your business looks to generate additional value from multiple data sources. Analysis of real-time data streams can deliver competitive business advantage, reduce costs and create new revenue streams.
Come learn how Attunity, through CDC technology, helps organisations on this journey from a batch-oriented world to a modern streaming architecture, on premises and in the cloud. Learn how to bring your most valuable data from relational OLTP systems, mainframes and SAP into this modern data architecture.
This document discusses Dell's solutions for big data and analytics workloads. It describes Dell's portfolio for unstructured analytics including storage, servers, and reference architectures. It also outlines Dell's vision for a unified streaming and batch analytics platform called Project Nautilus that would integrate Isilon storage with real-time stream processing.
This document discusses NoSQL databases and compares them to relational databases. It provides information on different types of NoSQL databases, including key-value stores, document databases, wide-column stores, and graph databases. The document outlines some use cases for each type and discusses concepts like eventual consistency, CAP theorem, and polyglot persistence. It also covers database architectures like replication and sharding that provide high availability and scalability.
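Of the scalability techniques mentioned above, sharding is the one most worth a sketch. Consistent hashing is a common way NoSQL stores implement it (this is a generic illustration, not any particular database's code): keys and nodes hash onto the same ring, a key lives on the first node at or after its position, so adding or removing one node only moves a fraction of the keys.

```python
import bisect
import hashlib

def ring_position(label):
    # Map any string to a stable position on the hash ring.
    return int(hashlib.md5(label.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c"]
ring = sorted((ring_position(n), n) for n in nodes)

def owner(key):
    """First node clockwise from the key's ring position."""
    positions = [pos for pos, _ in ring]
    idx = bisect.bisect(positions, ring_position(key)) % len(ring)
    return ring[idx][1]

# Every key deterministically maps to exactly one node.
assert owner("user:42") == owner("user:42")
assert owner("user:42") in nodes
```

Replication then follows naturally: store each key on its owner plus the next N-1 distinct nodes around the ring, which is how eventual-consistency systems keep copies available when a node fails.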
Moving Beyond Cache by Yiftach Shoolman - Redis Day Bangalore 2020 - Redis Labs
This document summarizes a presentation about Redis and Redis Enterprise. It discusses how Redis can be used as a database due to its fast performance, modern data types, and ability to share data across deployments. It highlights Redis modules like RediSearch for full-text search and RedisGraph for graph databases. Redis Enterprise provides additional features like high availability, geo-distribution, durability, security and multi-tenancy. It also discusses how Redis Enterprise enables active-active deployments across regions with CRDTs for strong consistency and high performance.
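The active-active CRDT idea mentioned above is easiest to see in the simplest CRDT, a grow-only counter. This toy sketch is not Redis code: each region increments only its own slot, and merging replicas is an element-wise max, so replicas converge regardless of the order updates arrive in.

```python
def increment(counter, region, amount=1):
    # Each region only ever touches its own slot.
    counter[region] = counter.get(region, 0) + amount

def merge(a, b):
    # Element-wise max: commutative, associative, idempotent.
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

def value(counter):
    return sum(counter.values())

us, eu = {}, {}
increment(us, "us-east")      # both regions accept writes concurrently
increment(eu, "eu-west")
increment(eu, "eu-west")

merged = merge(us, eu)
assert value(merged) == 3                 # converges to the global total
assert merge(us, eu) == merge(eu, us)     # merge order doesn't matter
```

Those three merge properties (commutative, associative, idempotent) are what let geo-distributed replicas exchange state in any order and still agree, without a coordination round-trip on every write.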
Data Driven Advanced Analytics using Denodo Platform on AWS - Denodo
The document discusses challenges with data-driven cloud modernization and how the Denodo platform can help address them. It outlines Denodo's capabilities like universal connectivity, data services APIs, security and governance features. Example use cases are presented around real-time analytics, centralized access control and transitioning to the cloud. Key benefits of the Denodo data virtualization approach are that it provides a logical view of data across sources and enables self-service analytics while reducing costs and IT dependencies.
High-performance database technology for rock-solid IoT solutions - Clusterpoint
Clusterpoint is a privately held database software company founded in 2006 with 32 employees. Their product is a hybrid operational database, analytics, and search platform that provides secure, high-performance distributed data management at scale. It reduces total cost of ownership by 80% over traditional relational databases by providing blazing fast performance, unlimited scalability, and bulletproof transactions with instant text search and security. Clusterpoint also offers their database software as a cloud database as a service to instantly scale databases on demand.
The document discusses Microsoft's data platform and cloud services. It highlights:
1) Microsoft's data platform provides intelligence over all data with SQL and Apache Spark, enabling AI and machine learning over any data.
2) Microsoft offers data modernization solutions for migrating to the cloud or managing data on-premises and in hybrid environments.
3) Migrating databases to Azure provides cost savings, security, high performance, and intelligent capabilities through services like Azure SQL Database and Azure Cosmos DB.
Similar to Turnkey Multi-Region, Active-Active Session Stores with Steeltoe, Redis Enterprise, and PAS
What AI Means For Your Product Strategy And What To Do About It - VMware Tanzu
The document summarizes Matthew Quinn's presentation on "What AI Means For Your Product Strategy And What To Do About It" at Denver Startup Week 2023. The presentation discusses how generative AI could impact product strategies by potentially solving problems companies have ignored or allowing competitors to create new solutions. Quinn advises product teams to evaluate their strategies and roadmaps, ensure they understand user needs, and consider how AI may change the problems being addressed. He provides examples of how AI could influence product development for apps in home organization and solar sales. Quinn concludes by urging attendees not to ignore AI's potential impacts and to have hard conversations about emerging threats and opportunities.
Make the Right Thing the Obvious Thing at Cardinal Health 2023 - VMware Tanzu
This document discusses the evolution of internal developer platforms and defines what they are. It provides a timeline of how technologies like infrastructure as a service, public clouds, containers and Kubernetes have shaped developer platforms. The key aspects of an internal developer platform are described as providing application-centric abstractions, service level agreements, automated processes from code to production, consolidated monitoring and feedback. The document advocates that internal platforms should make the right choices obvious and easy for developers. It also introduces Backstage as an open source solution for building internal developer portals.
Enhancing DevEx and Simplifying Operations at Scale - VMware Tanzu
Cardinal Health introduced Tanzu Application Service in 2016 and set up foundations for cloud native applications in AWS and later migrated to GCP in 2018. TAS has provided Cardinal Health with benefits like faster development of applications, zero downtime for critical applications, hosting over 5,000 application instances, quicker patching for security vulnerabilities, and savings through reduced lead times and staffing needs.
Dan Vega discussed upcoming changes and improvements in Spring including Spring Boot 3, which will have support for JDK 17, Jakarta EE 9/10, ahead-of-time compilation, improved observability with Micrometer, and Project Loom's virtual threads. Spring Boot 3.1 additions were also highlighted such as Docker Compose integration and Spring Authorization Server 1.0. Spring Boot 3.2 will focus on embracing virtual threads from Project Loom to improve scalability of web applications.
Platforms, Platform Engineering, & Platform as a ProductVMware Tanzu
This document discusses building platforms as products and reducing developer toil. It notes that platform engineering now encompasses PaaS and developer tools. A quote from Mercedes-Benz emphasizes building platforms for developers, not for the company itself. The document contrasts reactive, ticket-driven approaches with automated, self-service platforms and products. It discusses moving from treating the platform as a cost center to a product run by experts who drive business results. Finally, it provides questions to identify sources of developer toil, such as issues with workstation setup, running software locally, integration testing, committing changes, and release processes.
This document provides an overview of building cloud-ready applications in .NET. It defines what makes an application cloud-ready, discusses common issues with legacy applications, and recommends design patterns and practices to address these issues, including loose coupling, high cohesion, messaging, service discovery, API gateways, and resiliency policies. It includes code examples and links to additional resources.
Dan Vega discussed new features and capabilities in Spring Boot 3 and beyond, including support for JDK 17, Jakarta EE 9, ahead-of-time compilation, observability with Micrometer, Docker Compose integration, and initial support for Project Loom's virtual threads in Spring Boot 3.2 to improve scalability. He provided an overview of each new feature and explained how they can help Spring applications.
Spring Cloud Gateway - SpringOne Tour 2023 Charles Schwab.pdfVMware Tanzu
Spring Cloud Gateway is a gateway that provides routing, security, monitoring, and resiliency capabilities for microservices. It acts as an API gateway and sits in front of microservices, routing requests to the appropriate microservice. The gateway uses predicates and filters to route requests and modify requests and responses. It is lightweight and built on reactive principles to enable it to scale to thousands of routes.
This document appears to be from a VMware Tanzu Developer Connect presentation. It discusses Tanzu Application Platform (TAP), which provides a developer experience on Kubernetes across multiple clouds. TAP aims to unlock developer productivity, build rapid paths to production, and coordinate the work of development, security and operations teams. It offers features like pre-configured templates, integrated developer tools, centralized visibility and workload status, role-based access control, automated pipelines and built-in security. The presentation provides examples of how these capabilities improve experiences for developers, operations teams and security teams.
The document provides information about a Tanzu Developer Connect Workshop on Tanzu Application Platform. The agenda includes welcome and introductions on Tanzu Application Platform, followed by interactive hands-on workshops on the developer experience and operator experience. It will conclude with a quiz, prizes and giveaways. The document discusses challenges with developing on Kubernetes and how Tanzu Application Platform aims to improve the developer experience with features like pre-configured templates, developer tools integration, rapid iteration and centralized management.
The Tanzu Developer Connect is a hands-on workshop that dives deep into TAP. Attendees receive a hands on experience. This is a great program to leverage accounts with current TAP opportunities.
Simplify and Scale Enterprise Apps in the Cloud | Dallas 2023VMware Tanzu
This document discusses simplifying and scaling enterprise Spring applications in the cloud. It provides an overview of Azure Spring Apps, which is a fully managed platform for running Spring applications on Azure. Azure Spring Apps handles infrastructure management and application lifecycle management, allowing developers to focus on code. It is jointly built, operated, and supported by Microsoft and VMware. The document demonstrates how to create an Azure Spring Apps service, create an application, and deploy code to the application using three simple commands. It also discusses features of Azure Spring Apps Enterprise, which includes additional capabilities from VMware Tanzu components.
SpringOne Tour: Deliver 15-Factor Applications on Kubernetes with Spring BootVMware Tanzu
The document discusses 15 factors for building cloud native applications with Kubernetes based on the 12 factor app methodology. It covers factors such as treating code as immutable, externalizing configuration, building stateless and disposable processes, implementing authentication and authorization securely, and monitoring applications like space probes. The presentation aims to provide an overview of the 15 factors and demonstrate how to build cloud native applications using Kubernetes based on these principles.
SpringOne Tour: The Influential Software EngineerVMware Tanzu
The document discusses the importance of culture in software projects and how to influence culture. It notes that software projects involve people and personalities, not just technology. It emphasizes that culture informs everything a company does and is very difficult to change. It provides advice on being aware of your company's culture, finding ways to inculcate good cultural values like writing high-quality code, and approaches for influencing decision makers to prioritize culture.
SpringOne Tour: Domain-Driven Design: Theory vs PracticeVMware Tanzu
This document discusses domain-driven design, clean architecture, bounded contexts, and various modeling concepts. It provides examples of an e-scooter reservation system to illustrate domain modeling techniques. Key topics covered include identifying aggregates, bounded contexts, ensuring single sources of truth, avoiding anemic domain models, and focusing on observable domain behaviors rather than implementation details.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from one minute of downtime come to $5-10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service, including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
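The trigram technique mentioned above is easy to see in miniature. Below is a rough Python sketch of the idea behind pg_trgm-style fuzzy matching (the drug names and the misspelled query are invented for illustration; this is not the pg_trgm implementation itself, which Postgres runs in C with index support):

```python
def trigrams(s: str) -> set[str]:
    # pg_trgm-style: lowercase, pad with two leading and one trailing space,
    # then take every run of three characters
    s = "  " + s.lower() + " "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a: str, b: str) -> float:
    # Jaccard-style ratio of shared trigrams to total distinct trigrams
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

drugs = ["aspirin", "ibuprofen", "paracetamol"]
query = "asprin"  # misspelled user input
best = max(drugs, key=lambda d: similarity(query, d))
```

In Postgres itself the equivalent is the `similarity()` function from the pg_trgm extension, typically paired with a GIN or GiST trigram index so the fuzzy match stays fast at scale.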
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
From Natural Language to Structured Solr Queries using LLMsSease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive”) gap remains between the data user's needs and the data producer's constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
5. Uniquely Suited to Modern Use Cases
A full range of capabilities that simplify and accelerate next generation applications:
Real Time Analytics
User Session Store
Real Time Data Ingest
High Speed Transactions
Job & Queue Management
Time Series Data
Complex Statistical Analysis
Notifications
Distributed Lock
Content Caching
Geospatial Data
Streaming Data
Machine Learning
Search
6. Data Structures - Redis’ Building Blocks
Strings: “I'm a Plain Text String!”
Lists: [ A → B → C → D → E ]
Hashes: { A: “foo”, B: “bar”, C: “baz” }
Sets: { A, B, C, D, E }
Sorted Sets: { A: 0.1, B: 0.3, C: 100 }
Bitmaps: 0011010101100111001010
Bit field: {23334}{112345569}{766538}
Hyperloglog: 00110101 11001110
Streams: →{id1=time1.seq1(A: “xyz”, B: “cdf”), id2=time2.seq2(D: “abc”)}→
Geospatial Indexes: { A: (51.5, 0.12), B: (32.1, 34.7) }
Example: “Retrieve the e-mail address of the user with the highest bid in an auction that started on July 24th at 11:00pm PST” → ZREVRANGE 07242015_2300 0 0
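The auction query above can be imitated in a few lines of plain Python, as a toy stand-in for a Redis sorted set (the key name follows the slide; the bidders and bid amounts are invented for illustration):

```python
# Toy model of the sorted set "07242015_2300":
# member = bidder's e-mail address, score = bid amount.
auction = {
    "alice@example.com": 120.0,
    "carol@example.com": 180.0,
    "bob@example.com": 250.0,
}

def zrevrange(zset, start, stop):
    """Return members ordered by descending score, like ZREVRANGE key start stop."""
    ranked = sorted(zset, key=zset.get, reverse=True)
    return ranked[start:stop + 1]

# ZREVRANGE 07242015_2300 0 0 -> the single highest bidder
highest_bidder = zrevrange(auction, 0, 0)[0]
```

A real sorted set keeps members ordered as they are inserted (O(log N) updates) rather than re-sorting on every read; this sketch also ignores ZREVRANGE's negative-index semantics.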
7. Multi-Model Functionality at Any Scale
Dedicated engine for each data model (vs. API only)
Model engines can be selectively loaded, according to use case
All model engines access the same data, eliminating the need for transferring data between them
8. Redis Enterprise Technology
Redis Enterprise Node / Redis Enterprise Cluster
Shared-nothing cluster architecture
Fully compatible with open source commands & data structures
9. High Performance
Redis Enterprise Delivers Strong Eventual Consistency and Causal Consistency
Read and write with low local sub-millisecond latency
Guaranteed data consistency
CRDT based: the datatypes are conflict-free by design. All databases eventually converge automatically to the same state with strong eventual consistency.
Supports causal consistency, executing read and write operations in an order that reflects causality.
Simplifies the app design: develop as if it’s a single app in a single geo; we take care of all the rest.
10. Multi-Cloud and Hybrid Cloud/On-Prem Support
(Diagram: multiple apps connected across clouds and on-prem, deployed Active-Active or Active-Passive.)
12. CRDT Example: Counter
Applies commutative and associative properties. Replicas A, B, and C each increment the same key at time t1 (INCR K 30, INCR K 40, INCR K 50); every replica converges to the same value regardless of the order in which the increments arrive:
K = 30 + 40 + 50 = 120
K = 40 + 30 + 50 = 120
K = 50 + 30 + 40 = 120
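The commutativity claim on this slide can be checked directly. A minimal sketch, with plain integer addition standing in for Redis Enterprise's actual CRDT machinery:

```python
from itertools import permutations

def replay(increments):
    """Apply INCR amounts in the given delivery order."""
    k = 0
    for amount in increments:
        k += amount  # addition is commutative and associative
    return k

# INCR K 30 at replica A, INCR K 40 at B, INCR K 50 at C
ops = [30, 40, 50]

# Replay every possible delivery order; all six converge to the same value
results = {replay(order) for order in permutations(ops)}
```

Counters are the easy case; operations that are not naturally commutative need extra metadata (per-replica state, timestamps) to converge, which is exactly what CRDT designs supply.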
14. Causal Consistency
Based on a Directed Acyclic Graph model. DAGs can only be traversed in one direction, no matter their shape.
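To make the DAG idea concrete, here is a small hypothetical Python sketch (the operation names are invented): each operation lists the operations that causally precede it, and a valid delivery order is any one-direction traversal (topological sort) of that graph.

```python
# Causal dependencies form a DAG: an op may only be applied after its causes.
deps = {
    "write_apple": set(),           # no prior causes
    "write_pie": {"write_apple"},   # issued after "write_apple" was observed
    "write_hot": {"write_pie"},
    "write_tasty": {"write_pie"},
}

def causal_order(deps):
    """One valid delivery order: a topological sort of the dependency DAG.
    Assumes the graph is acyclic, as a DAG guarantees."""
    order, applied = [], set()
    while len(order) < len(deps):
        for op, causes in deps.items():
            if op not in applied and causes <= applied:
                order.append(op)
                applied.add(op)
    return order
```

Any order that respects the edges is acceptable, which is why causal consistency permits more concurrency than a single global order while still never showing an effect before its cause.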
15. Redis CRDTs: Per-Key Causal Consistency
(Diagram: replicas A, B, and C exchange updates to the same key, such as “Apple”, “Pie”, “Hot”, and “Tasty”. It contrasts source-FIFO delivery, where a replica can observe “Pie” before the “Apple” it depends on, with per-key causal consistency, where updates are applied in an order that respects their causes.)