This presentation explains why we picked Kafka as our streaming hub and how we use Kafka Streams to avoid common anti-patterns, streamline the development experience, improve resilience, enhance performance, and enable experimentation. A step-by-step example introduces the Kafka Streams DSL and shows what happens under the hood of a stateful streaming application.
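To make the DSL concrete, here is a minimal sketch of a stateful Kafka Streams application of the kind such a walkthrough builds; the topic names and word-count logic are illustrative assumptions, not taken from the talk.

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-demo"); // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // groupBy + count is the stateful part: Kafka Streams materializes the
        // counts in a local RocksDB store backed by a changelog topic, which is
        // the kind of under-the-hood machinery the talk refers to.
        KTable<String, Long> counts = builder.<String, String>stream("sentences")
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
            .groupBy((key, word) -> word)
            .count();
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}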
MongoDB 3.0, Wired Tiger, and the era of pluggable storage engines
With MongoDB 3.0, the Wired Tiger storage engine is included, and third-party pluggable storage engines become possible as well. Kenny will present performance benchmarks, show typical configuration options, and help attendees make sense of these changes and how they affect MongoDB workloads. He will detail the various components of the Wired Tiger engine and the impact they have on overall performance.
Kenny will share benchmarks, code and general tunables for the Wired Tiger engine and more.
Netflix created and open sourced the Dynomite project to provide reusable distributed database infrastructure that turns single-server data stores into scalable, distributed databases. Dynomite supports pluggable protocols and pluggable storage engines, which lets us add sharding and replication to a variety of non-distributed data stores. The same database infrastructure can be reused across workloads from in-memory to on-disk, and across APIs from key/value to document databases. Dynomite allows application developers to choose the API that best fits their requirements, while operations teams can select the best backing data store for the workload. Netflix uses Dynomite in production to handle millions of operations per second on top of Redis and RocksDB. In this talk, we show how we achieved high availability, to the point where any Dynomite node can be terminated without client-side downtime, along with best practices and the challenges of deploying Dynomite in production.
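A minimal sketch of the pluggable-protocol idea: because Dynomite can speak the Redis protocol, a stock Redis client (Jedis here) talks to a Dynomite node unchanged. The host and port are illustrative assumptions (8102 is Dynomite's usual client-facing port).

import redis.clients.jedis.Jedis;

public class DynomiteDemo {
    public static void main(String[] args) {
        // A plain Redis client pointed at a Dynomite node; Dynomite handles
        // sharding and replication behind the Redis protocol.
        try (Jedis jedis = new Jedis("dynomite-node-1", 8102)) { // assumed host/port
            jedis.set("user:42:name", "Ada");
            System.out.println(jedis.get("user:42:name"));
        }
    }
}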
Kafka Streams - From the Ground Up to the Cloud (VMware Tanzu)
SpringOne Platform 2017
Marius Bogoevici, Red Hat
In this session we will introduce the Kafka Streams API and the Kafka Streams processing engine, followed by the Kafka Streams support in the Spring portfolio, showing how to easily write Kafka Streams applications using Spring Cloud Stream and deploy them on various cloud platforms using Spring Cloud Data Flow.
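As a taste of that programming model, here is a minimal sketch using the Kafka Streams binder's functional style; note that this 2017 session predates the functional model and used the annotation-based model current at the time, and the bean, topic, and property names here are illustrative assumptions.

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApp {
    public static void main(String[] args) {
        SpringApplication.run(UppercaseApp.class, args);
    }

    // The binder binds the function's input and output to Kafka topics via
    // configuration, e.g. in application.properties (names assumed):
    //   spring.cloud.stream.bindings.process-in-0.destination=text-in
    //   spring.cloud.stream.bindings.process-out-0.destination=text-out
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(v -> v.toUpperCase());
    }
}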
Migrating Data Pipeline from MongoDB to Cassandra (Demi Ben-Ari)
MongoDB is a great NoSQL database: it is very flexible and easy to use. But can it handle massive read/write throughput? And what happens when you need to scale everything out, easily?
We will lay out the reasons for migrating our data pipeline to Apache Cassandra, and the steps we took to do it in a short period without any prior knowledge of Cassandra.
We'll list our lessons learned as well.
Bio:
Demi Ben-Ari, Sr. Data Engineer @Windward.
I have over 9 years of experience building various systems, both near-real-time applications and Big Data distributed systems.
Co-organizer of the “Big Things” Big Data community: http://somebigthings.com/big-things-intro/
Cloud database vendors tend to report performance numbers for the sweet spot, or from runs on highly optimized hardware with specific workload parameters.
Moreover, many of these systems are not tested under the failure scenarios that can appear in the public cloud.
At Netflix, as a cloud-native enterprise, our focus is on high availability, which we achieve by deploying across multiple regions.
As a result, our data store performance is strongly affected by our global deployment model, instance types and workload patterns.
We were therefore interested in a cloud database benchmark tool that could be deployed in a loosely coupled fashion, as a microservice, with the ability to change configuration parameters dynamically at run time. In this paper, we present Netflix Data Benchmark (NDBench). NDBench offers pluggable workload patterns and load generators, and supports different client APIs. It can deploy, manage and monitor multiple instances from a single control point. NDBench was designed to run indefinitely, which lets us exercise long-running database maintenance jobs, test database systems under conditions that affect performance (such as compactions, repairs, etc.), and observe client-side issues like memory leaks and heap pressure. We have been running NDBench for almost 3 years, validating multiple database versions, testing numerous NoSQL systems running in the cloud, and vetting new functionality. NDBench is a major component of our testing and validation pipelines.
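To illustrate the pluggable-client idea, here is a hypothetical sketch (not NDBench's actual API): one driver loop, swappable data-store plugins, and a workload meant to run indefinitely.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

interface BenchClient {
    void init() throws Exception;                    // open connections to the target store
    String readSingle(String key) throws Exception;  // one read operation
    String writeSingle(String key) throws Exception; // one write operation
    void shutdown();
}

// A stand-in plugin; a real one would wrap Cassandra, Redis, etc.
final class InMemoryClient implements BenchClient {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void init() {}
    public String readSingle(String key) { return store.get(key); }
    public String writeSingle(String key) { return store.put(key, "value-" + key); }
    public void shutdown() {}
}

public class BenchDriver {
    public static void main(String[] args) throws Exception {
        BenchClient client = new InMemoryClient(); // swap the plugin per data store
        client.init();
        // NDBench runs indefinitely; bounded here so the sketch terminates.
        for (long op = 0; op < 1_000_000; op++) {
            String key = "key-" + ThreadLocalRandom.current().nextInt(100_000);
            if (ThreadLocalRandom.current().nextDouble() < 0.8) {
                client.readSingle(key);   // 80/20 read/write mix, tunable at runtime
            } else {
                client.writeSingle(key);
            }
        }
        client.shutdown();
    }
}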
At TiDB DevCon 2020, Max Liu, CEO at PingCAP, gave a keynote speech. He believes that today’s database should be more real-time, more flexible, and easier to use, and TiDB, an elastic, cloud-native, real-time HTAP database, is exactly that kind of database.
In Apache Cassandra Lunch #59: Functions in Cassandra, we discussed the functions that are usable inside of the Cassandra database. The live recording of Cassandra Lunch, which includes a more in-depth discussion and a demo, is embedded below in case you were not able to attend live.
Introducing TiDB [Delivered: 09/27/18 at NYC SQL Meetup] (Kevin Xu)
This presentation was delivered at the NYC SQL meetup on September 27, 2018. It provides a technical overview of the TiDB platform, a deep dive into TiDB's MySQL-compatible layer and MySQL ecosystem tools, the Mobike use case, and an appendix with detailed material on the coprocessor and the transaction model.
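Because TiDB speaks the MySQL wire protocol, a stock MySQL JDBC driver connects unchanged, which is the compatibility point the deep dive covers. A minimal sketch; the host, credentials, and a mysql-connector dependency are assumptions, though 4000 is TiDB's default SQL port.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TiDBHello {
    public static void main(String[] args) throws Exception {
        // A stock MySQL JDBC URL; only the port differs from MySQL defaults.
        String url = "jdbc:mysql://127.0.0.1:4000/test?useSSL=false";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT tidb_version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}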
Ryan will expand on his popular blog series and drill down into the internals of the database. Ryan will discuss optimizing query performance, best indexing schemes, how to manage clustering (including meta and data nodes), the impact of IFQL on the database, the impact of cardinality on performance, TSI, and other internals that will help you architect better solutions around InfluxDB.
Why You Definitely Don’t Want to Build Your Own Time Series Database (InfluxData)
At Outlyer, an infrastructure monitoring company, we had to build our own TSDB back in 2015 to support our service. Two years later, after seeing for ourselves how hard it is to build and scale a TSDB, we decided to take a different direction. This talk reviews our journey and the challenges we hit trying to scale a TSDB for large customers, and will hopefully talk some people out of trying to build one themselves, because it is not easy!
Saratov Open IT Tech Talk.
Damir Yaraev:
Introduction to Apache Cassandra (In this presentation, Damir explains when and why it is worth moving from time-tested relational databases to the NoSQL solutions that have become fashionable lately. As an example, he looks at the column-oriented NoSQL database Apache Cassandra.)
Bravo Six, Going Realtime. Transitioning Activision Data Pipeline to Streaming (Yaroslav Tkachenko)
The Activision Data team has been running a data pipeline for a variety of Activision games for many years. Historically we used a mix of micro-batch microservices coupled with classic Big Data tools like Hadoop and Hive for ETL. As a result, it could take up to 4-6 hours for data to become available to end customers.
In the last few years, the adoption of data in the organization skyrocketed. We needed to modernize our legacy data pipeline and provide near-real-time access to data in order to improve reporting, gather insights faster, and power web and mobile applications. I want to tell the story of how we leveraged Kafka Streams and Kafka Connect heavily to reduce end-to-end latency to minutes while making the pipeline easier and cheaper to run. We were able to validate the new data pipeline by successfully launching two massive games just 4 weeks apart.
How QBerg scaled to store data longer, query it faster (MariaDB plc)
The continuous growth in the services QBerg delivers, and the countries it delivers them to, requires an ever-increasing amount of resources. During the last year QBerg reached a critical point, storing so much transactional data that standard relational databases could no longer meet the SLAs, or support the features, its customers required. As an example, web analytics had to be capped at a maximum of four months of history. The introduction of MariaDB ColumnStore, flanked by existing MariaDB Server databases, not only allows them to store multiple years' worth of historical data for analytics, it decreased overall processing time by an order of magnitude right off the bat. The move to a unified platform was incremental, using MariaDB MaxScale as both a router and a replicator. QBerg is now able to replicate full InnoDB schemas to MariaDB ColumnStore and incrementally update big tables without impacting the performance of ongoing transactions.
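A minimal sketch of what the unified platform can look like from the application's side, assuming a MaxScale listener on port 4006 and a mariadb-java-client dependency (both assumptions): the app connects only to MaxScale, which can route a query like this analytical scan to the ColumnStore backend.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxScaleDemo {
    public static void main(String[] args) throws Exception {
        // The application only ever sees MaxScale; backends stay invisible.
        String url = "jdbc:mariadb://maxscale-host:4006/analytics"; // assumed listener
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             Statement st = conn.createStatement();
             // A multi-year aggregation of the kind ColumnStore is kept for.
             ResultSet rs = st.executeQuery(
                 "SELECT YEAR(ts), COUNT(*) FROM page_views GROUP BY YEAR(ts)")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + ": " + rs.getLong(2));
            }
        }
    }
}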
Empowering the AWS DynamoDB™ application developer with Alternator (ScyllaDB)
Getting started with AWS DynamoDB™ is famously easy, but as an application grows and evolves it often starts to struggle with DynamoDB's limitations. We introduce Scylla's Alternator, which provides the same API as DynamoDB but aims to empower the application developer. In this presentation we survey some of Alternator's developer-centered features: Alternator lets you test and eventually deploy your application anywhere, on any public cloud or private cluster. It efficiently supports multiple tables, so it does not force difficult single-table design. Finally, Alternator provides the developer with strong observability tools, whose insights can expose bottlenecks, improve performance and even lower cost.
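The "same API" claim means a standard DynamoDB client only needs its endpoint changed to talk to Alternator. A minimal sketch with the AWS SDK for Java v2; the endpoint URL, table name, and dummy credentials are illustrative assumptions (8000 is the port Alternator typically listens on).

import java.net.URI;
import java.util.Map;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class AlternatorDemo {
    public static void main(String[] args) {
        DynamoDbClient ddb = DynamoDbClient.builder()
            .endpointOverride(URI.create("http://scylla-node:8000")) // Alternator, not AWS
            .region(Region.US_EAST_1) // required by the SDK; not meaningful to Alternator
            .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create("none", "none")))
            .build();

        ddb.putItem(PutItemRequest.builder()
            .tableName("demo") // assumed table
            .item(Map.of("id", AttributeValue.builder().s("42").build()))
            .build());
        ddb.close();
    }
}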
Presentation given at the GoSF meetup on July 20, 2016. It was also recorded on BigMarker here: https://www.bigmarker.com/remote-meetup-go/GoSF-EVCache-Peripheral-I-O-Building-Origin-Cache-for-Images
High Performance Object Pascal Code on Servers (at EKON 22) (Arnaud Bouchez)
This EKON 22 conference session is about high performance on servers, written in the Object Pascal (Delphi / FPC) language. Profiling should be the first step, to avoid premature optimization, which is the root of all evil (Knuth). Once bottlenecks are identified, we introduce some simple architecture patterns (like caching or microservices), data structures and algorithms to make the process actually faster, with minimal refactoring. It was a fun session about how to write faster code, ending up looking at the Delphi CPU view, even if you don't know assembly.
Dyn delivers exceptional Internet performance. Enabling high-quality services requires data centers around the globe, and to manage their services, customers need timely insight collected from all over the world. Dyn uses DataStax Enterprise (DSE) to deploy complex clusters across multiple datacenters, enabling sub-50 ms query responses over hundreds of billions of data points. From granular DNS traffic data to aggregated counts for a variety of report dimensions, DSE at Dyn has been up since 2013 and has shined through upgrades, data center migrations, DDoS attacks and hardware failures. In this webinar, Principal Engineers Tim Chadwick and Rick Bross cover the requirements that led them to choose DSE as their go-to Big Data solution, the path that led to Spark, and the lessons learned in the process.
Big Data is everywhere these days. But what is it and how can you use it to fuel your business? Data is as important to organizations as labour and capital, and if organizations can effectively capture, analyze, visualize and apply big data insights to their business goals, they can differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line.
Join this session to understand the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
Reasons to attend:
Learn how AWS can help you process and make better use of your data with meaningful insights.
Learn about Amazon Elastic MapReduce, a managed Hadoop service, and Amazon Redshift, a fully managed petabyte-scale data warehouse.
Learn about real-time data processing with Amazon Kinesis (a minimal producer sketch follows).
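A minimal Kinesis producer sketch with the AWS SDK for Java v2; the stream name, partition key, and payload are illustrative assumptions.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

public class KinesisPut {
    public static void main(String[] args) {
        try (KinesisClient kinesis = KinesisClient.create()) {
            kinesis.putRecord(PutRecordRequest.builder()
                .streamName("clickstream")                  // assumed stream name
                .partitionKey("user-123")                   // determines the target shard
                .data(SdkBytes.fromUtf8String("{\"event\":\"page_view\"}"))
                .build());
        }
    }
}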
Benchmarking your cloud performance with top 4 global public clouds (data://disrupted®)
In this presentation, we present performance measurements of the leading cloud providers: AWS, Google Cloud, Microsoft Azure, and DigitalOcean. We'll give you useful tools to measure your own cloud performance and a handy guide to calculating cloud TCO (total cost of ownership). In addition, you'll learn how to correctly estimate your market positioning and perform better than the cloud giants.
Boyan Krosnov is Co-Founder and Chief Product Officer of StorPool Storage. He has been part of the technical teams building five service providers from scratch in four countries. In most of these projects, he designed the architecture, led the technical teams, and managed implementations worth millions.
Kaseya Connect 2013: Optimizing Your K Server - Best Practices in Kaseya Infr... (Kaseya)
Do you think you have maximized your Kaseya Server for your current environment? Are you running into performance issues that are difficult to address? Are you planning for future growth? Then this session is what you were looking for! Join us in this technical session to hear from Kaseya experts how they have tuned Kaseya to scale and manage thousands of devices on a single virtual machine, including IIS, SQL and Kaseya-specific optimization techniques.
"Introduction to Kx Technology", James Corcoran, Head of Engineering EMEA at ...Dataconomy Media
"Introduction to Kx Technology", James Corcoran, Head of Engineering EMEA at First Derivatives
About the Author:
James is Senior Vice President, Fast Data Solutions at Kx, where he has worked as a developer since 2009. In his career to date, he has worked in the algorithmic trading space at many of the world's top financial institutions using Kx, a low-latency technology for analysing time-series data. He is a certified Professional Risk Manager and holds a master's in Quantitative Finance from University College Dublin. In recent years he has built systems for clients ranging from start-ups to blue-chip companies in data-intensive industries such as pharma, utilities and telco.
Learn how Aerospike's Hybrid Memory Architecture brings transactions and analytics together to power real-time Systems of Engagement (SOEs) for companies across AdTech, financial services, telecommunications, and eCommerce. We take a deep dive into the architecture, including use cases, topology, Smart Clients, XDR and more. Aerospike delivers predictable performance, high uptime and availability at the lowest total cost of ownership (TCO).
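A minimal sketch of a write and read against Aerospike with its Java client; the host, namespace, set, and bin names are illustrative assumptions (3000 is the client port Aerospike uses by default).

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class AerospikeDemo {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        Key key = new Key("test", "profiles", "user-42"); // namespace, set, user key
        client.put(null, key, new Bin("visits", 1));      // write a single bin
        Record record = client.get(null, key);            // read the record back
        System.out.println(record.getInt("visits"));
        client.close();
    }
}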
Pmemkv is an open source key-value store for persistent memory based on the Persistent Memory Development Kit (PMDK). Written in C and C++, it provides optimized bindings for Java, JavaScript, and Ruby, and includes multiple storage engines for different use cases.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the use of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk presents strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application (a minimal write sketch follows this list)
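Under the hood, JMeter's InfluxDB backend listener boils down to writing timestamped measurement points, roughly like this sketch with the influxdb-java 2.x client; the URL, database, tag, and field names are illustrative assumptions.

import java.util.concurrent.TimeUnit;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class JMeterStylePoint {
    public static void main(String[] args) {
        InfluxDB influx = InfluxDBFactory.connect("http://localhost:8086");
        influx.setDatabase("jmeter"); // assumed database
        influx.write(Point.measurement("jmeter")
            .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
            .tag("transaction", "login")      // one tag per sampler, for Grafana filtering
            .addField("responseTime", 183L)   // milliseconds for one sample
            .addField("success", 1L)
            .build());
        influx.close();
    }
}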
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We ended with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This talk covers the key trends across hardware, cloud and open source, explores how these areas are likely to mature and develop over the short and long term, and considers how organisations can position themselves to adapt and thrive.
2. Why kdb+
• Uses the vector-based q language: fewer lines of code and shorter execution times
• Designed for synthesizing market (read: huge and diverse) data. Kdb+/tick consists of a ticker plant/publisher, which publishes to a real-time database, which at end of day feeds a historical database
• Analytics in C, C++, C#, Python, Java, etc. can be combined to work on huge data sets
• Gets data from multiple sources and in multiple formats
• Supports several database connectivity standards
• Analytics can be built on triggers with no ill effect on speed/performance
• Extremely robust in converting data to/from different types (such as generating XML, encryption, producing formats for 802.11x standards)
• Kdb+ is only about 200 KB in size
• Comes with a built-in web server, which can be used to return query results in various formats: XML, CSV, HTML, TXT
• Numerous APIs for Java, C++, Python, etc.
3. q
• SQL plus time-series analysis
• Data is stored in ordered form, so queries are simple.
For example, the closing price for each stock by date:
select last price by date, sym from trade
• Date sub-parts are enumerated and can take part in aggregations.
For example, ten-minute roll-ups on a stock:
select last price, sum size by 10 xbar time.minute from trade where sym=`MSFT
• Many built-in functions mean less programming and fewer objects.
Volume-weighted average price:
select size wavg price from trade where sym=`MSFT
• Efficient server-side programming minimizes the amount of data sent across the network.
Max price from a trade table for each symbol for one day of trading:
select max price by sym from trade
• Dedicated servers can be created for running heavy queries
4. Getting Started
• Download: http://bit.ly/vbRQa
• Create a directory under C:\ called q, and add C:\q\w32 to $PATH (put the contents of the "windows" folder in q)
5. SDE
• Use \\ to exit a q session
• Results can be moved to a web browser or Excel
• Script files (*.q) are preferred: you write your script in the file and execute it from the q shell command line.
For example: \l [pathname]trade.q
• Download the q-Console IDE: http://bit.ly/BSsUq