Redis Developers Day 2014 - Redis Labs Talks (Redis Labs)
These are the slides the Redis Labs team used to accompany the session we gave during the first ever Redis Developers Day, held on October 2nd, 2014, in London. They include some of the ideas we've come up with to tackle operational challenges in the hyper-dense, multi-tenant Redis deployments that make up our service, Redis Cloud.
High Performance Redis - Tague Griffith, GoPro (Redis Labs)
High Performance Redis looks at a wide range of techniques, from programming to system tuning, to deploy and maintain an extremely high-performing Redis cluster. From the operational perspective, the talk lays out multiple techniques for clustering (sharding) Redis systems and examines how the different approaches impact performance. The talk further examines system settings (Linux network parameters, Redis configuration) and how they impact performance, both good and bad. Finally, for the developer, we look at how different ways of structuring data demonstrate different performance characteristics.
Counting Image Views Using Redis Cluster (Redis Labs)
Streaming Logs and Processing View Counts using Redis Cluster
Seandon Mooy (Imgur)
When you browse through Imgur, you notice that each user's post includes the number of views for that particular post. Imgur processes over 3 billion views per month and powers our view count feature using Redis. In this talk, we cover our current architecture for streaming logs and processing view counts using Redis Cluster, as well as some of the alternatives we explored and why we chose Redis.
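The counting pattern behind such an architecture can be sketched in miniature. The snippet below is an illustrative stand-in, not Imgur's code: it reuses Redis Cluster's documented key-routing rule (CRC16 of the key modulo 16384 hash slots) and a hypothetical `views:post:{id}` key scheme, with plain Python dicts playing the role of cluster nodes.

```python
import binascii

NUM_SLOTS = 16384  # Redis Cluster's fixed hash-slot count

def hash_slot(key: bytes) -> int:
    # Redis Cluster routes a key by CRC16(key) mod 16384;
    # binascii.crc_hqx implements the same CRC-16/XMODEM variant.
    return binascii.crc_hqx(key, 0) % NUM_SLOTS

class MiniCluster:
    """Toy stand-in for a Redis Cluster: N nodes, each owning a slot range."""
    def __init__(self, nodes: int):
        self.nodes = [dict() for _ in range(nodes)]
        self.slots_per_node = NUM_SLOTS // nodes

    def _node_for(self, key: bytes) -> dict:
        idx = min(hash_slot(key) // self.slots_per_node, len(self.nodes) - 1)
        return self.nodes[idx]

    def incr(self, key: bytes) -> int:
        # Equivalent of Redis INCR: a per-key counter on the owning node.
        node = self._node_for(key)
        node[key] = node.get(key, 0) + 1
        return node[key]

cluster = MiniCluster(nodes=3)
for _ in range(5):
    cluster.incr(b"views:post:42")     # hypothetical key scheme
print(cluster.incr(b"views:post:42"))  # -> 6
```

Because each key hashes to exactly one slot (and thus one node), increments for one post never contend across nodes, which is what makes this pattern scale horizontally.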
HBaseCon 2012 | Base Metrics: What They Mean to You - Cloudera (Cloudera, Inc.)
If you’re running an HBase cluster in production, you’ve probably noticed that HBase shares a number of useful metrics about everything from your block cache performance to your HDFS latencies over JMX (or Ganglia, or just a file). The problem is that it’s sometimes hard to know what these metrics mean to you and your users. Should you be worried if your memstoreSizeMB is 1.5GB? What if your regionservers have a hundred stores each? This talk will explain how to understand and interpret the metrics HBase exports. Along the way we’ll cover some high-level background on HBase’s internals, and share some battle-tested rules of thumb about how to interpret and react to metrics you might see.
hbaseconasia2017: Large-scale data near-line loading method and architecture (HBaseCon)
Shuaifeng Zhou
When we load data into HBase in real time, we use the Put/PutList interface. After receiving a put request, the regionserver writes the WAL, writes the data into the memstore, flushes the memstore to the disk store, and then compacts the files again and again. This procedure occupies too many resources and causes read/write performance to decrease. To solve the problem, we provide a near-line loading method and architecture that greatly increases the loading bandwidth and decreases the impact on read operations.
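The write-path overhead described above is why bulk approaches buffer and pre-sort records instead of issuing individual puts. The sketch below is a generic illustration of that batching idea, not the authors' implementation; the `NearLineLoader` class and its threshold are invented for the example.

```python
from typing import List, Tuple

class NearLineLoader:
    """Generic sketch: instead of sending each record through the
    WAL/memstore path, buffer records and emit sorted batches suitable
    for bulk loading (the idea behind HBase bulk load)."""
    def __init__(self, batch_size: int = 4):
        self.batch_size = batch_size
        self.buffer: List[Tuple[bytes, bytes]] = []
        self.flushed_batches: List[List[Tuple[bytes, bytes]]] = []

    def put(self, row_key: bytes, value: bytes) -> None:
        self.buffer.append((row_key, value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        # Sorting by row key lets the store ingest the batch as one
        # pre-ordered file, skipping per-record WAL and memstore work.
        self.flushed_batches.append(sorted(self.buffer))
        self.buffer = []

loader = NearLineLoader(batch_size=3)
for k in [b"r3", b"r1", b"r2", b"r9"]:
    loader.put(k, b"v")
loader.flush()
print(len(loader.flushed_batches))      # -> 2
print(loader.flushed_batches[0][0][0])  # -> b'r1'
```

The trade-off is latency: records become visible only at flush time, which is why the talk calls the method "near-line" rather than real-time.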
Redis Day Keynote - Salvatore Sanfilippo (Redis Labs)
Redis' seventh birthday was recently celebrated with the community, several contributors and users. This is Salvatore's keynote as he kicked off Redis Day in Tel Aviv.
Deploying any software can be a challenge if you don't understand how resources are used or how to plan for the capacity of your systems. Whether you need to deploy or grow a single MongoDB instance, a replica set, or tens of sharded clusters, you probably share the same challenges in trying to size that deployment.
This webinar will cover what resources MongoDB uses, and how to plan for their use in your deployment. Topics covered will include understanding how to model and plan capacity needs for new and growing deployments. The goal of this webinar will be to provide you with the tools needed to be successful in managing your MongoDB capacity planning tasks.
Troubleshooting Kafka's socket server: from incident to resolution (Joel Koshy)
LinkedIn’s Kafka deployment is nearing 1300 brokers that move close to 1.3 trillion messages a day. While operating Kafka smoothly even at this scale is a testament to both Kafka’s scalability and the operational expertise of LinkedIn SREs, we occasionally run into some very interesting bugs at this scale. In this talk I will dive into a production issue that we recently encountered as an example of how even a subtle bug can suddenly manifest at scale and cause a near meltdown of the cluster. We will go over how we detected and responded to the situation, how we investigated it after the fact, and summarize some lessons learned and best practices from this incident.
In this talk we report on our experience with Redis-on-Flash (RoF)—a recently introduced product that uses SSDs as a RAM extension to dramatically increase the effective dataset capacity that can be stored on a single server. This talk provides the first in-depth RoF system performance characterization: we consider different use cases (varying both RAM-to-disk access ratio and object size), and compare SATA-based RoF, NVMe-based RoF, and all-RAM Redis deployments. We show that the superior performance of NVMe drives in terms of both latency and peak bandwidth makes them a particularly good fit for RoF use cases. Specifically, we show that backing RoF with NVMe drives can deliver more than 2 million operations per second with sub-millisecond latency on a single server.
As a company starts dealing with large amounts of data, operation engineers are challenged with managing the influx of information while ensuring the resilience of data. Hadoop HDFS, Mesos and Spark help reduce issues with a scheduler that allows data cluster resources to be shared. It provides a common ground where data scientists and engineers can meet, develop high performance data processing applications and deploy their own tools.
Introduction to Apache BookKeeper Distributed Storage (Streamlio)
A brief technical introduction to Apache BookKeeper, the scalable, fault-tolerant, and low-latency storage service optimized for real-time and streaming workloads.
October 2016 HUG: Pulsar, a highly scalable, low latency pub-sub messaging s... (Yahoo Developer Network)
Yahoo recently open-sourced Pulsar, a highly scalable, low latency pub-sub messaging system running on commodity hardware. It provides simple pub-sub messaging semantics over topics, guaranteed at-least-once delivery of messages, automatic cursor management for subscribers, and cross-datacenter replication. Pulsar is used across various Yahoo applications for large scale data pipelines. Learn more about Pulsar architecture and use-cases in this talk.
Speaker: Matteo Merli from the Pulsar team at Yahoo
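The at-least-once delivery and per-subscriber cursor management mentioned above can be illustrated with a toy model. This is not the Pulsar client API, just a minimal simulation of the semantics: a per-subscription cursor that only advances on acknowledgment, so unacknowledged messages are redelivered.

```python
class Topic:
    """Toy model of a per-subscription cursor: the broker tracks, for
    each subscription, the position of the last acknowledged message,
    so unacked messages are redelivered (at-least-once delivery)."""
    def __init__(self):
        self.log = []      # append-only message log
        self.cursors = {}  # subscription name -> next position to deliver

    def publish(self, msg):
        self.log.append(msg)

    def subscribe(self, name):
        self.cursors.setdefault(name, 0)

    def receive(self, name):
        pos = self.cursors[name]
        return (pos, self.log[pos]) if pos < len(self.log) else None

    def ack(self, name, pos):
        # Acking advances the cursor; without an ack, receive()
        # returns the same message again.
        if pos == self.cursors[name]:
            self.cursors[name] += 1

t = Topic()
t.subscribe("sub-a")
t.publish("m1"); t.publish("m2")
pos, msg = t.receive("sub-a")    # -> (0, 'm1')
pos2, msg2 = t.receive("sub-a")  # same message again: (0, 'm1')
t.ack("sub-a", pos)
print(t.receive("sub-a"))        # -> (1, 'm2')
```

Because the cursor is stored broker-side per subscription, a restarted consumer resumes from its last ack rather than replaying the whole topic.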
Redis in a Multi-Tenant Environment: High Availability, Monitoring & Much More! (Redis Labs)
Running any application in a multi-tenant environment poses its challenges. This talk focuses on how we at Rackspace run Redis in a multi-tenant environment, ensuring security, performance, fault tolerance and high availability. It covers an architecture deep dive of multi-tenant Redis on the cloud, management of sentinels, monitoring and operations of a large-scale Redis deployment, introducing new Redis versions, scaling, security, and some lessons learnt. The target audience is anyone interested in the deployment and operational aspects of running Redis. This is relevant not only for those who want to run Redis themselves, but also for those interested in how a Redis provider might be doing it for them.
Apache HBase, Accelerated: In-Memory Flush and Compaction (HBaseCon)
Eshcar Hillel and Anastasia Braginsky (Yahoo!)
Real-time HBase application performance depends critically on the amount of I/O in the datapath. Here we’ll describe an optimization of HBase for high-churn applications that frequently insert/update/delete the same keys, such as for high-speed queuing and e-commerce.
Speakers: Nick Dimiduk (Hortonworks) and Nicolas Liochon (Scaled Risk)
HBase is an online database so response latency is critical. This talk will examine sources of latency in HBase, detailing steps along the read and write paths. We'll examine the entire request lifecycle, from client to server and back again. We'll also look at the different factors that impact latency, including GC, cache misses, and system failures. Finally, the talk will highlight some of the work done in 0.96+ to improve the reliability of HBase.
Perforce BTrees: The Arcane and the Profane (Perforce)
"Get a tour of Perforce BTree history, its behaviors and configuration. Learn about performance alternatives, space management tools and future projects, too."
Work with Hundreds of Hot Terabytes in JVMs (Malin Weiss)
Third-party updates to the database can cause Hazelcast applications to work with data which is out-of-date.
By synchronizing with an underlying database using an SQL Reflector, the Hazelcast Maps will be “alive” and change whenever the underlying data changes. The solution can also automatically derive domain models directly from the database schemas, so that you can start using the solution very quickly and handle extreme volumes of data.
JavaOne 2015 - Work With Hundreds of Hot Terabytes in JVMs (Speedment, Inc.)
Presentation summary: By leveraging memory-mapped files, the Chronicle Engine supports large maps that can easily exceed the size of your server’s RAM, allowing application developers to create huge JVMs where data can be obtained quickly and with predictable latency. The Chronicle Engine can be synchronized with an underlying database using Speedment, so that your in-memory maps will be “alive” and change whenever data changes in the underlying database. Speedment can also automatically derive domain models directly from the database, so that you can start using the solution very quickly. Because the Java Maps are mapped onto files, the maps can be shared instantly between several JVMs, and when you restart a JVM it may start very quickly without having to reload data from the underlying database. The mapped files can be hundreds of terabytes, which has been done in real-world deployments.
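Chronicle's approach is Java-specific, but the underlying idea of a map whose storage lives in a memory-mapped file rather than the process heap can be sketched in a few lines. The snippet below is an illustrative Python analogue with a hypothetical fixed-width record layout, not the Chronicle Engine API.

```python
import mmap
import os
import struct
import tempfile

# One fixed-width 8-byte slot per key index: a tiny file-backed
# array standing in for a map whose storage is not on the heap.
SLOT = struct.Struct("<q")
N_SLOTS = 1024

path = os.path.join(tempfile.mkdtemp(), "values.bin")
with open(path, "wb") as f:
    f.truncate(N_SLOTS * SLOT.size)  # sparse file; pages load on demand

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)

    def put(index: int, value: int) -> None:
        SLOT.pack_into(mm, index * SLOT.size, value)

    def get(index: int) -> int:
        return SLOT.unpack_from(mm, index * SLOT.size)[0]

    put(7, 123456789)
    print(get(7))  # -> 123456789
    mm.flush()     # write back so other processes (or a restart) see it
    mm.close()
```

Because the state lives in the file, a second process mapping `values.bin` sees the same data, and a restart needs no reload from a database; that is the property the talk scales up to hundreds of terabytes.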
From Cache to In-Memory Data Grid: Introduction to Hazelcast (Taras Matyashovsky)
This presentation:
* covers the basics of caching and popular cache types
* explains the evolution from a simple cache to a distributed cache, and from a distributed cache to an IMDG
* does not describe the use of NoSQL solutions for caching
* is not intended as a product comparison or as a promotion of Hazelcast as the best solution
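The "simple cache" starting point of that evolution can be sketched as an in-process, bounded map with least-recently-used eviction (a minimal illustration, not Hazelcast code):

```python
from collections import OrderedDict

class LRUCache:
    """The 'simple cache' end of the evolution: in-process,
    bounded, least-recently-used eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key, default=None):
        if key in self.data:
            self.data.move_to_end(key)     # mark as recently used
            return self.data[key]
        return default

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1); c.put("b", 2)
c.get("a")         # touch "a" so "b" becomes the eviction candidate
c.put("c", 3)      # evicts "b"
print(c.get("b"))  # -> None
print(c.get("a"))  # -> 1
```

Everything that distinguishes a distributed cache or an IMDG, such as partitioning entries across nodes and replicating them for fault tolerance, is about moving this map out of a single process.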
Most mid-sized Django websites thrive by relying on memcached. But what happens when basic memcached is not enough? And how can one identify when the caching architecture is becoming a bottleneck? We'll cover the problems we've encountered and the solutions we've put in place.
Initial deck on WebSphere eXtreme Scale with WebSphere Commerce Server (Billy Newport)
This is the deck used to show how IBM WebSphere eXtreme Scale improves the usability of WebSphere Commerce Server by replacing private per-JVM disk-based caches with a shared datagrid-based cache for page fragment caching.
Using Galera Replication to Create Geo-Distributed Clusters on the WAN (Sakari Keskitalo)
We will show the advantages of having a geo-distributed database cluster and how to create one using Galera Cluster for MySQL. We will also discuss the configuration and status variables that are involved and how to deal with typical situations on the WAN such as slow, untrusted or unreliable links, latency and packet loss. We will demonstrate a multi-region cluster on Amazon EC2 and perform some throughput and latency measurements in real time (video: http://galeracluster.com/videos/using-galera-replication-to-create-geo-distributed-clusters-on-the-wan-webinar-video-3/).
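The kinds of configuration variables such a webinar covers can be sketched in a my.cnf fragment. The option names below are real Galera provider options, but the values are illustrative assumptions, not tuned recommendations:

```ini
# my.cnf fragment -- illustrative values, not tuned recommendations.
# gmcast.segment groups nodes by datacenter so replication traffic is
# relayed once per segment; the evs.* timeouts are relaxed so WAN
# latency and packet loss are not mistaken for node failures.
[mysqld]
wsrep_provider_options="gmcast.segment=1;evs.keepalive_period=PT3S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M"
```

Each datacenter gets its own segment number, and the timeout values would be chosen based on measured round-trip times between regions.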
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums, so many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Welcome to ViralQR, Your Best QR Code Generator (ViralQR)
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, enhancing customer interaction and making business more fluid. We firmly believe in the ability of QR codes to change how businesses interact with their customers, and we are set on making that technology accessible and usable far and wide.
Our Achievements
Since our inception, we have served many clients, providing QR codes for marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and rich features, which help businesses get the most out of QR codes.
Our Services
At ViralQR, we offer a comprehensive suite of services that caters to your needs:
Static QR codes: Create free static QR codes. These QR codes can store information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These offer advanced features and are subscription-based. They can link directly to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, ViralQR offers a 14-day free trial, an excellent opportunity for new users to get a feel for the platform. From there, you can easily subscribe and experience the full range of dynamic QR code features. The subscription plans are flexibly priced so that businesses of virtually any size can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and more. QR codes can be placed on flyers, packaging, merchandise, and banners, or used as a substitute for cash and cards in a restaurant or coffee shop. By integrating QR codes into your business, you can improve customer engagement and streamline operations.
Comprehensive Analytics
ViralQR subscribers receive detailed analytics and tracking tools that give insight into QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
Thank you for choosing ViralQR; we offer nothing but the best in QR code services to meet diverse business needs!
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. Overview
• Fatcache is memcache on SSD
• Memcache is a volatile in-memory <K, V> cache, whereas fatcache persists <K, V> pairs on SSD
• Memory is much faster than SSD (~1000X)
• But memory is also much costlier
• DRAM cost per server increases dramatically beyond 150 GB
• Power cost increases similarly
• DRAM is not a viable option for horizontally scaling to TBs of data across nodes
• A network-connected SSD design makes sense if network latencies dominate SSD latencies by a large factor
3. Latency comparison
• Intel 320 series SSD:
o Read latency: 75 us
o Write latency: 90 us
o Sequential Read Bandwidth: 270 MB/s
o Sequential Write Bandwidth: 220 MB/s
• Memory:
o Latency: 50-70 ns
o Bandwidth: 15-25 GB/s
• Rotational disk:
o Seek time: 3-15 ms
o Data transfer rate: 130 MB/s
4. SSD I/O characteristics
• SSD reads happen at page-level granularity, usually 4 KB.
o A single page read takes ~70 us, so SSD accesses must be minimized to keep SSD latency under network latency.
o Fatcache reduces SSD reads by maintaining an in-memory index for all on-disk data.
• SSD writes are essentially erase-and-rewrites.
o In-place updates to SSD degrade performance.
o Small, random writes reduce SSD lifetime and must be avoided.
o Fatcache aggregates all writes in memory and writes to SSD in batches, in a log-structured fashion.
5. Scaling
• Network I/O in fatcache is async, but SSD I/O is sync.
• To exploit the SSD's full parallelism, we need to run multiple instances of fatcache against a single SSD; each instance works on a fixed 'hard' partition of the SSD.
• SSD throughput can be scaled further by increasing the number of SSDs per machine and the number of machines.
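The 'hard' partitioning above can be pictured as follows. This is a hypothetical sketch, not fatcache configuration: the SSD's capacity is split into N fixed, non-overlapping byte ranges, one per instance, and each key is routed to exactly one instance (and hence one partition).

```python
import zlib

def partitions(capacity, n_instances):
    """Split [0, capacity) into n_instances equal, non-overlapping ranges."""
    size = capacity // n_instances
    return [(i * size, (i + 1) * size) for i in range(n_instances)]

def instance_for(key, n_instances):
    """Route a key to a fixed instance; crc32 keeps the routing stable."""
    return zlib.crc32(key) % n_instances
```

Because the ranges never overlap, the instances need no coordination on disk, and each sync SSD I/O stream runs in parallel with the others.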
6. Accessibility
• Fatcache supports the Memcache protocol to get/set data.
• Storage commands: SET, ADD, REPLACE, APPEND, PREPEND, CAS
• Retrieval commands: GET, GETS
• Delete command: DELETE
• Arithmetic commands: INCR, DECR
• Quit command: QUIT
• Clients must support at least one method of hashing keys among servers.
• Clients should support a consistent hashing scheme to handle server additions and removals.
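The consistent hashing the slide recommends can be sketched as a generic hash ring (this is not code from any particular memcache client library). Servers are placed on the ring at many virtual-node points; a key belongs to the first server point clockwise of its hash, so adding or removing a server only remaps the keys that server owned.

```python
import bisect
import hashlib

class HashRing:
    """Generic consistent-hash ring for spreading keys across servers."""

    def __init__(self, servers, vnodes=64):
        self.ring = []                       # sorted list of (point, server)
        for server in servers:
            for v in range(vnodes):          # virtual nodes smooth the load
                self.ring.append((self._hash(f"{server}#{v}"), server))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        """The first ring point clockwise of the key's hash owns the key."""
        i = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[i][1]
```

With plain modulo hashing, removing one of N servers remaps roughly (N-1)/N of all keys; with the ring, only the departed server's share moves.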
7. Durability
• Fatcache is NOT a <K,V> store
• Any item written to fatcache is subject to cache eviction.
• Capacity-triggered eviction happens when, at the time of adding a new item, there are no free chunks and no free pages available on SSD.
• Page-level eviction results in a cache miss if a client later accesses a key belonging to an evicted page.
• Ideally, the server should expose stats that help determine whether fatcache is frequently hitting capacity and needs to be scaled out.
o Currently there is no observability through stats.
8. Availability
• If a fatcache instance becomes unavailable, the client can take one of two approaches: failover or failure.
• Failover: if the client supports consistent hashing, it 'reroutes' the request to the next available instance in the list.
• Initially, the client has to deal with cache misses.
• The client can choose to start updating keys in the new instance.
• When the failed instance comes back, the client starts seeing older versions, since requests get 'rerouted' back to the original instance.
o Any updates made during failover will not be visible after the restart.
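The failover behaviour above can be sketched as a small client-side loop. This is a hedged illustration, not any real client's API: the `get_from` callable stands in for a real connection, and a simple key hash stands in for the ring lookup.

```python
import zlib

class FailoverError(Exception):
    """Raised when every instance in the list is unavailable."""

def get_with_failover(servers, key, get_from):
    """Try the key's primary instance; on failure, walk to the next one."""
    start = zlib.crc32(key) % len(servers)   # pick the primary by key hash
    for step in range(len(servers)):
        server = servers[(start + step) % len(servers)]
        try:
            return get_from(server, key)     # may raise for a down instance
        except ConnectionError:
            continue                          # 'reroute' to the next instance
    raise FailoverError("all fatcache instances unavailable")
```

Note the consistency caveat from the slide: this loop silently lands on a different instance, so writes made during the outage are stranded there once the request flow moves back to the recovered primary.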
9. Availability
• Failure: in this approach, the client simply treats the server-unavailable scenario as a cache miss.
• Depending on its cache-miss strategy, the client can choose to connect to a secondary store until the server comes back.
• Manual monitoring is needed to detect failures and restart instances quickly.
• A restart loads the snapshot of data last persisted to SSD.
o Any pending writes still batched in memory at that time will be lost.
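The failure approach reduces to ordinary read-through caching. A minimal sketch, where `cache_get` and `backing_get` are hypothetical stand-ins for a fatcache client call and a secondary store lookup:

```python
def get_or_fallback(cache_get, backing_get, key):
    """Treat an unreachable cache instance exactly like a cache miss."""
    try:
        value = cache_get(key)       # may raise if the instance is down
    except ConnectionError:
        value = None                 # server unavailable == cache miss
    if value is None:
        value = backing_get(key)     # read-through to the secondary store
    return value
```

The trade-off versus failover is load rather than staleness: the secondary store absorbs the full miss traffic until the instance is restarted.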
10. Performance
• Published performance results:
o A single fatcache instance can do close to 100K set/sec for 100-byte item sizes.
o A single fatcache instance can do close to 4.5K get/sec for 100-byte item sizes.
o All 8 fatcache instances in aggregate do 32K get/sec against a single 600 GB SSD.
• We still need to run our own in-house tests to get real numbers.