This document discusses benchmarking Riak and provides an overview of benchmarking best practices. It describes the different types of benchmarks, including throughput and latency tests. The document outlines the steps to benchmarking, including starting a test cluster, configuring a test, running the test, and generating graphs to analyze results. It introduces the basho_bench tool for benchmarking and provides examples of key and value distributions. Some challenges of benchmarking like designing accurate tests and accounting for system limits are also covered. The document recommends conducting application-specific benchmarks based on real usage patterns.
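The benchmark workflow described above — drive a workload against the store, record per-operation latency, then summarize throughput and percentiles — can be sketched in a few lines. This is a minimal illustration, not basho_bench itself: the in-memory `kv_store` dict stands in for a real Riak client, and the uniform key choice is one of the key distributions such a tool lets you configure.

```python
import random
import time

def run_benchmark(num_ops=10_000, key_space=1_000):
    """Drive a mixed GET/PUT workload and record per-operation latency."""
    kv_store = {}   # stand-in for a real Riak client connection
    latencies = []
    for _ in range(num_ops):
        key = f"key{random.randrange(key_space)}"  # uniform key distribution
        start = time.perf_counter()
        if random.random() < 0.5:
            kv_store[key] = b"value"   # PUT
        else:
            kv_store.get(key)          # GET
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    total = sum(latencies)
    return {
        "throughput_ops_per_s": num_ops / total if total else float("inf"),
        "median_s": latencies[num_ops // 2],
        "p99_s": latencies[int(num_ops * 0.99)],
    }

stats = run_benchmark()
```

As the document notes, such synthetic numbers only become meaningful when the key and value distributions mirror your application's real usage patterns.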
Johann-Peter Hartmann from Mayflower GmbH gave a presentation on practical DevOps for developers. He discussed how development used to involve maintaining golden images and static configurations, which became unreliable as applications changed. He then showed how developers can now use tools like Vagrant and Puppet/Chef to manage development environments and configurations as code. This allows for faster setup times, consistency across environments, and easier version management.
Johann-Peter Hartmann from Mayflower GmbH gave a presentation on practical DevOps for developers. The presentation covered three main points: (1) managing development environments with Vagrant and VeeWee, (2) managing configuration with Puppet or Chef, and (3) including the configuration as part of source code by placing Puppet/Chef files in a "configuration" folder within the code repository. The goal is to enable reliable and repeatable development environments that are consistent with production.
Can you upgrade to Puppet 4.x? (Beginner) - Puppet
This document discusses upgrading Puppet code to work with newer versions of Puppet, specifically Puppet 4. It outlines reasons to upgrade like getting security updates and new features. It provides tips for testing code like using rspec and additional Puppet master processes. Code practices that are deprecated in newer versions are identified like inheritance and modifying remote modules. The presentation demonstrates upgrading a module to Puppet 4.
This document provides an overview of Apache Camel, an open source integration framework. It discusses Camel's architecture, including routes, endpoints and components. It also describes Camel's domain specific language for defining routes. Finally, it provides a sample route that reads an XML file from FTP, transforms it using XSLT, and sends it to a JMS queue.
The document summarizes TorqueBox, which allows Ruby applications to run on the JBoss Application Server. TorqueBox combines JRuby and JBoss AS to provide features like clustering, load balancing, high availability, messaging, background jobs, and long-running services to Ruby applications. It allows Ruby applications to leverage Java libraries and tools while retaining the simplicity and flexibility of Ruby.
TorqueBox allows developers to build and deploy Rack and JRuby applications on JBoss Application Server. It provides features such as background processing, scheduling, services and clustering out of the box. TorqueBox makes use of JRuby's Java integration to provide a fast Ruby runtime and access to Java libraries and tools. It aims to provide all the capabilities of a full application server while allowing developers to work in Ruby and avoid technologies like XML, Java code and WAR files. Setting up TorqueBox involves downloading the distribution, exporting some environment variables and using Rake tasks to deploy and manage applications.
With more businesses moving to cloud-based solutions every day, we must rethink the strategies used to deploy Perl applications and related libraries, given the volatile nature of the cloud and its constraints.
In this talk I go over the challenges posed by virtualised environments, and consider several solutions to them. The use cases are all related to Amazon's EC2, but will easily be adapted for GoGrid, Mosso, and others.
Bob McWhirter gave a presentation on TorqueBox, an open-source Ruby application server built on the Java Virtual Machine. Some key points:
- TorqueBox allows Ruby applications like Rails to take advantage of features traditionally provided by Java application servers like scalability, messaging, jobs, and telephony.
- It provides queues for asynchronous processing and scheduling jobs to run on a cron-like schedule directly from Ruby classes.
- The use of the Java VM allows clustering and high availability of Ruby applications in the same way achieved with Java applications.
- All components like queues, jobs, and clustering work seamlessly together since everything is integrated within TorqueBox.
Toby Crawley gives an overview of TorqueBox, an open-source application server for Ruby that is based on JBoss Application Server and JRuby. He discusses how TorqueBox allows Ruby applications to take advantage of features like background processing, scheduling, daemons, and clustering. He also provides instructions for installing TorqueBox and deploying Ruby applications on it using rake tasks and deployment descriptors.
This document discusses using DataMapper with Infinispan as a clustered NoSQL data store. It covers:
- DataMapper is a Ruby ORM that can use Infinispan as its data adapter through the dm-infinispan-adapter gem.
- Infinispan is a highly scalable, distributed Java cache that provides a data grid. It supports replication, distribution and local caching.
- The dm-infinispan-adapter allows DataMapper objects to be stored in Infinispan, enabling a clustered NoSQL backend for Ruby applications. It generates runtime annotations to integrate with Hibernate Search.
This document discusses CoffeeScript syntax for variables, functions, conditionals, operators, arrays, objects, and iteration. It provides examples of how to write these concepts in CoffeeScript compared to JavaScript. Key points covered include using CoffeeScript's syntax for functions without parentheses, optional parentheses for function calls, removing semicolons, and using list comprehensions for iteration and filtering arrays.
Apache Camel is a powerful open source integration framework that allows developers to focus on business logic by hiding complexity. It supports over 80 components and 19 data formats, and provides a domain-specific language for integration patterns in Java, XML, and Scala. Camel routes can be run in standalone applications or deployed to various containers.
SymfonyCon Madrid 2014 - Rock Solid Deployment of Symfony Apps - Pablo Godel
Web applications are becoming increasingly more complex, so deployment is not just transferring files with FTP anymore. We will go over the different challenges and how to deploy our PHP applications effectively, safely and consistently with the latest tools and techniques. We will also look at tools that complement deployment with management, configuration and monitoring.
The document discusses the past, present, and future of jBPM. It describes how jBPM has evolved from version 3.x, which had many API choices and implementation challenges, to version 4.x, which resolves past issues. It outlines migration scenarios between versions and resources for learning more about jBPM. It concludes by announcing a surprise at the upcoming JFall conference - that attendees should hug a jBPM developer.
The document discusses Drools and the JBoss Business Rules Management System (BRMS), including an overview of concepts like rules, facts, and the runtime execution environment. It also covers authoring rules with the guided rule editor in the web interface or with DRL, and integrating rules with Spring and Camel frameworks at runtime.
London Atlassian User Group - February 2014 - Steve Smith
Continuous deployment is causing organisations to rethink how they build and release software. Atlassian Bamboo is rapidly adding features to help with automating deployment, but there are a lot of other practical and organisational issues that need to be addressed when adopting this development model. The Atlassian business-platforms team has been dealing with these issues over the last few months as we transition our order system to continuous deployment. This talk will cover why we adopted this model, some of challenges we encountered, and the approaches and tools we used to overcome them.
Bootstrapping Puppet and Application Deployment - PuppetConf 2013 - Puppet
"Bootstrapping Puppet and Application Deployment" by Robert de Macedo Soares, Application Security Engineer, BusinessWire.
Presentation Overview: A dive into the problems faced when first launching Puppet across existing, heterogeneous servers, outlining possible solutions using our experience as an example. In addition, this session will touch on application management and deployment using subversion and rake tasks, what works and what is a little rough around the edges.
Speaker Bio: Robert is an engineer who has spent the past several years attempting to automate away the need for the work that he does. Focusing on server automation and security work for BusinessWire, Robert also develops web services such as tee.ms, a chat service, and designs and develops games. Trism, which he co-designed, was nominated for Cellular Game of the Year by the Academy of Interactive Arts & Sciences in the 2009 Interactive Achievement Awards.
This document provides an overview of profiling PHP applications for performance. It begins by discussing common myths about PHP optimizations that provide little real performance benefit. Effective profiling is based on measuring actual performance results using tools. The document outlines different profiling modes for normal development and emergency situations. It then describes various tools that can be used to profile different parts of a PHP application, including the browser, web server, PHP code, database, and operating system. It emphasizes finding and addressing bottlenecks. The document concludes by offering advice like avoiding premature optimization, understanding problems fully before attempting to fix them, and asking others for help.
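The core advice above — measure actual performance before optimizing — is independent of PHP. As a language-neutral illustration (sketched in Python), a small timing helper like the hypothetical `timed` context manager below lets you compare candidate bottlenecks with real numbers rather than folklore:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record the wall-clock time of the enclosed block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

results = {}
with timed("string_concat", results):
    s = ""
    for i in range(10_000):
        s += str(i)
with timed("join", results):
    s2 = "".join(str(i) for i in range(10_000))

# The measurements, not intuition, decide which variant matters.
slowest = max(results, key=results.get)
```

In a real investigation you would point the same idea at the layers the document lists — web server, application code, database queries — and only optimize the one the numbers indict.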
Leonid Mazur has experience testing web applications and web services at companies like Yuzu, Become, Yahoo, Talent6, PayPal, and CafePress. He focuses on automated client side testing using Selenium and server side testing using Perl/Ruby scripts. For database testing, he writes scripts to verify data integrity and consistency with source feeds.
Puppet Camp Duesseldorf 2014: Martin Alfke - Can you upgrade to Puppet 4.x? - NETWAYS
Puppet Labs maintains the Puppet software stack and provides regular updates to it.
But what about your Puppet DSL code? How can you ensure that your code will also work on newer Puppet versions?
This talk shows the basic steps and actions that should be taken to keep Puppet DSL code fully functional on newer Puppet versions.
I will show common old practices that have been replaced by more modern ways of using Puppet, and how to migrate to the new solutions. Additionally, I want you to learn how to test your Puppet DSL code prior to putting it onto a new Puppet master.
Spot Trading - A case study in continuous delivery for mission critical finan... - SaltStack
This is a presentation given by Jeremy Alons, Spot Trading, at the DevOps Summit Chicago in August 2014. Jeremy shares how Spot Trading does automated deployments for mission-critical financial services with a case study in continuous delivery.
Writing High-Performance Software by Arvid Norberg - bittorrentinc
The document discusses techniques for writing high-performance software, focusing on optimizing memory access and reducing context switching. It covers CPU memory hierarchies, data structures, and socket programming. Some key points include organizing data sequentially in memory to improve cache hits, batching work to amortize context switching costs, and using asynchronous I/O to avoid blocking threads on disk or network operations.
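One of the points above — batching work to amortize fixed per-call costs such as syscalls and context switches — can be illustrated with a simple call-counting sketch. The `CountingSink` class and the 16 KB flush threshold are invented for the illustration:

```python
class CountingSink:
    """Stand-in for a socket/file that counts the write calls it receives."""
    def __init__(self):
        self.calls = 0
        self.nbytes = 0
    def write(self, data):
        self.calls += 1          # each call would pay syscall overhead
        self.nbytes += len(data)

messages = [b"x" * 100 for _ in range(1000)]

# Unbatched: one (simulated) syscall per message.
unbatched = CountingSink()
for m in messages:
    unbatched.write(m)

# Batched: accumulate small messages and flush in large chunks,
# spreading the fixed per-call cost over many messages.
batched = CountingSink()
buf = bytearray()
for m in messages:
    buf += m
    if len(buf) >= 16_384:       # flush threshold (illustrative)
        batched.write(bytes(buf))
        buf.clear()
if buf:
    batched.write(bytes(buf))

assert unbatched.nbytes == batched.nbytes  # same data moved, far fewer calls
```

The same amortization argument motivates the talk's other points: sequential memory layout amortizes cache-line fetches, and asynchronous I/O keeps threads from paying a context switch per blocking operation.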
Practical Continuous Deployment - Atlassian - London AUG 18 Feb 2014 - Matthew Cobby
The document discusses practical approaches to implementing continuous deployment. It describes converting an organization's internal systems to continuous delivery and deployment over six months to address integration issues. Continuous deployment aims to release features, not unfinished work, through automation that makes releasing repeatable. Stakeholders benefit from faster delivery of features to customers and clearer progress signals. The document outlines a development workflow involving tracking requests, branching per feature, automated testing, code reviews, merging to a release branch, and deploying to staging and production. It also addresses challenges of automation and coordination across servers for the "last mile" of deployment.
Migrating to a Bazel-based CI system: 6 learnings - Or Shachar
Two years ago, we were given a big challenge: transform the Wix build system, then based on Maven and TeamCity, into a new system that would support our exponentially growing scale.
But how could we move to a system so different in so many ways from the existing one? Furthermore, we were required not to break the current build system as we migrated to the new one.
Fast forward to today: the Wix backend CI system is fully migrated to Bazel! The system builds in a fraction of the time, even with our largest codebases. In this talk, we will describe how we achieved this, why it took us so long, what tools we had to build along the way (and what we already have, and will, open source!), and share the principles that helped us.
You can watch it here:
https://www.wix.engineering/post/bazelcon-2019-lessons-learned-from-migrating-our-build-system-to-bazel
Sascha Möllering discusses how his company moved from manual server setup and deployment to automated deployments using infrastructure as code and continuous delivery. They now deploy whenever needed using tools like Chef and JBoss to configure servers. Previously they faced challenges like manual processes, difficult rollbacks, and biweekly deployment windows. Now deployments are automated, safer, and can happen continuously.
The document discusses exploiting a parsing bug in Bash by combining it with CGI to gain remote code execution on a vulnerable system. It demonstrates using Bash variables containing payloads that are exported to a child shell via CGI, allowing the execution of arbitrary commands. Finally, it suggests using Netcat to create an interactive reverse shell backdoor on the target system without raising security flags.
This document provides an overview of Socorro, Mozilla's system for processing Firefox crash reports with Python. It describes the basic architecture, how a crash report moves through the system from collection to processing to storage in databases. It also discusses the scale of Socorro, currently processing over 2.5 million crash reports per day and storing over 110 terabytes of crash data. The document outlines Socorro's implementation including the various components, tools, and techniques used to manage complexity at this large scale.
Know More About Rational Performance - Snehamoy K - Roopa Nadkarni
Rational Performance Tester (RPT) is a tool for performance testing web applications. It can simulate thousands of virtual users to test an application's performance and scalability. RPT works with many web application frameworks and protocols. It combines access to protocol data with the ability to insert custom Java code, enabling advanced test scenarios. RPT uses a distributed architecture where test agents inject load from separate machines while the Eclipse workbench is used for test creation and analysis. Proper configuration of workbench and agent machines is important for optimizing test performance.
Similar to Using Basho Bench to Load Test Distributed Applications
Time series data is proliferating with literally every step we take: think of Fitbit bracelets that track your every move, or financial trading data, all of which is timestamped.
Time series data requires high performance reads and writes even with a huge number of data sources. Both speed and scale are integral to success, which makes for a unique challenge for your database.
A time series NoSQL data model requires flexibility to support unstructured, and semi-structured data as well as the ability to write range queries to analyze your time series data. So how can you tackle speed, scale and flexibility all at once?
Join Professional Services Architect Drew Kerrigan and Developer Advocate Matt Brender for a discussion of:
Examples of time series data sets, from IoT to Finance to jet engines
What makes time series queries different from other database queries
How to model your dataset to answer the right questions about your data
How to store, query and analyze a set of time series data points
Learn how a NoSQL database model and Riak TS can help you address the unique challenges of time series data.
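The modeling questions above — how to key time-series points so that range queries are cheap — come down to a composite (series, timestamp) key with timestamps kept in order. The sorted in-memory index below is a stand-in for a store like Riak TS, and the series/field names are purely illustrative:

```python
from bisect import bisect_left, bisect_right

class TimeSeries:
    """Points keyed by (series, timestamp); sorted timestamps make a
    range query a binary search plus a slice."""
    def __init__(self):
        self.data = {}  # series -> (sorted timestamps, parallel values)

    def write(self, series, ts, value):
        timestamps, values = self.data.setdefault(series, ([], []))
        i = bisect_left(timestamps, ts)   # keep timestamps sorted on insert
        timestamps.insert(i, ts)
        values.insert(i, value)

    def range_query(self, series, t_start, t_end):
        """Return all (ts, value) pairs with t_start <= ts <= t_end."""
        timestamps, values = self.data.get(series, ([], []))
        lo = bisect_left(timestamps, t_start)
        hi = bisect_right(timestamps, t_end)
        return list(zip(timestamps[lo:hi], values[lo:hi]))

ts = TimeSeries()
for t, temp in [(100, 20.1), (160, 20.4), (220, 20.9), (280, 21.3)]:
    ts.write("sensor-1/temperature", t, temp)
window = ts.range_query("sensor-1/temperature", 150, 250)
# window → [(160, 20.4), (220, 20.9)]
```

A real time-series store applies the same idea at scale, distributing series across nodes while keeping each series' points contiguous so range scans stay fast.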
1) Technology trends like big data, IoT, and hybrid cloud are allowing businesses to operate faster and more efficiently but require robust data management foundations.
2) As data, particularly unstructured data, grows exponentially, companies are moving to NoSQL databases that handle massive amounts of flexible data better than traditional SQL databases.
3) Whitepages, which provides contact information for over 55 million monthly users, selected Basho Riak KV as their NoSQL database solution due to its high availability, scalability, fault tolerance, and operational simplicity.
The document discusses distributed database systems and properties of the Riak database. It defines distributed systems and discusses key aspects like availability, fault tolerance, and latency. It explains Riak's masterless architecture and how it provides high availability and scalability through horizontal scaling on commodity servers. The document also covers consistency models and how Riak allows tuning availability and consistency based on use cases.
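Riak's tuning of availability versus consistency comes down to its quorum parameters: with N replicas per key, a read quorum of R and a write quorum of W overlap on at least one up-to-date replica whenever R + W > N. A small checker makes the trade-off concrete (the parameter names mirror Riak's n_val, r, and w; the derived properties are the standard quorum reasoning, not Riak-specific output):

```python
def quorum_properties(n, r, w):
    """Classify an N/R/W setting: read/write quorums are guaranteed to
    overlap on a current replica iff R + W > N; raising R or W costs
    availability when replicas fail."""
    return {
        "read_overlap": r + w > n,           # every read sees the latest write
        "tolerates_read_failures": n - r,    # replicas that may be down for reads
        "tolerates_write_failures": n - w,   # replicas that may be down for writes
    }

# A common default: n=3, r=2, w=2 gives overlapping quorums while
# still tolerating one failed replica on either path.
default = quorum_properties(3, 2, 2)

# Availability-leaning: r=1, w=1 answers fastest but gives up the
# overlap guarantee, so stale reads become possible.
fast = quorum_properties(3, 1, 1)
```

This is exactly the per-use-case tuning the document refers to: latency-sensitive, staleness-tolerant workloads can lower R and W, while read-your-writes workloads keep R + W above N.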
The Boston Riak meetup had Sean Kelly from Tapjoy digging into message queue infrastructure at the company. They process billions of requests a day, and queuing is an important element of that scale.
To kick us off, we discussed the basics of message queues, distributed systems and why dual writes are evil. Here is that talk with a few links to get you started.
This is a presentation by Peter Coppola, VP of Product and Marketing at Basho Technologies and Matthew Aslett, Research Director at 451 Research. Join them as they discuss whether multi-model databases and polyglot persistence have increased operational complexity. They'll discuss the benefits and importance of NoSQL databases and how the Basho Data Platform helps enterprises leverage Big Data applications.
Here's a walkthrough of the set CRDT within Riak and a bucket strategy for which Riak is the best choice. You'll see that conflict is inevitable. The set bucket type allows developers to rely on eventual consistency adding up to the data set that we expect.
For more on sets and CRDTs see:
http://basho.com/distributed-data-types-riak-2-0/
http://basho.com/data-modeling-with-riak/
http://docs.basho.com/riak/latest/dev/using/data-types/
Here's an example of how to code with Riak using cURL and ruby to do a basic PUT, GET and more. We then index the data using Apache Solr integration.
No matter what platform we’re discussing, we’re beyond the view of rows and columns. Data is more diverse than ever. More difficult to parse. Here is some of that story.
This is a presentation given by Matt Brender (@mjbrender) at Big Data TechCon 2015.
In this class, we will discuss why companies choose Riak over a relational database with a specific focus on availability, scalability, and the key/value data model. We then analyze the decision points that should be considered when choosing a non-relational solution and review data modeling, querying, and consistency guarantees. Finally, we end with simple patterns for building common applications in Riak using its key/value design, dealing with data conflicts that emerge in an eventually consistent system, and discuss multi-datacenter replication.
Here is Matt Brender's presentation at Big Data TechCon centered on understanding how distributed systems play a role in Big Data.
Full description:
Whether you’re an experienced user of Hadoop or a recent convert to Spark, you recognize that data is powerful when stored and analyzed. Analysis, as a workload, can be contrasted with the initial creation and storage of that data. These “active” workloads are what generate the data we covet.
Understanding this persistence of data as workload requires an appreciation of distributed systems. We will explore what factors affect your choice in database technology and particularly how to prioritize the choice in core architectural underpinnings present in NoSQL designs. We will also explore what these technologies solve and suggestions for how to align them with your business objectives.
You’ll leave this session with an understanding of the basic principles of NoSQL architectural design and a deeper understanding of the considerations when identifying a persistence solution for your active workloads.
Basho and Riak at GOTO Stockholm: "Don't Use My Database." (Basho Technologies)
What are common use cases for NoSQL? When should I avoid NoSQL? When is RDBMS just fine?
This presentation, delivered at the GOTO NoSQL Roadshow events in London and Stockholm in November of 2011 by Basho co-founder and COO, Antony Falco, takes a no-BS look at the tradeoffs one must make to gain the advantages offered by distributed databases like Riak.
22. Benchmarking Overview
•Determine your application’s needs
•Test typical and worst-case scenarios
•Minimize variables changed between scenarios
•Run early, run often
28. Benchmarking Steps
•Start up your test cluster
•Configure a test
•Run the test against the cluster
•Generate pretty graphs (requires R)
•Interpret graphs, tweak, re-test
34. basho_bench
•Dave “Dizzy” Smith’s experience testing instant messaging and database systems
•Created for internal benchmarking
•Benchmark anything key/value-like
•Used heavily on innostore and bitcask
•Simple, extensible Erlang API
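Configuration for basho_bench is a file of Erlang terms passed on the command line. The fragment below is a sketch following the documented config format; the driver name, endpoint, key space, and operation weights are illustrative and will differ for your cluster:

```erlang
%% sketch of a basho_bench test config (values illustrative)
{mode, max}.                  %% run as fast as possible; {rate, N} gives a fixed load
{duration, 10}.               %% minutes
{concurrent, 5}.              %% number of worker processes
{driver, basho_bench_driver_riakc_pb}.
{key_generator, {int_to_bin_bigendian, {uniform_int, 10000}}}.
{value_generator, {fixed_bin, 10000}}.
{riakc_pb_ips, [{"127.0.0.1", 8087}]}.
{operations, [{get, 4}, {update, 4}, {delete, 1}]}.  %% weighted operation mix
```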
60. Benchmarking is Hard
•Design an accurate test
•Tool and system limits
•Testing a multi-variate space
•Easy to take results out of context
•Everything is relative!
71. Conduct your own
•Take metrics from existing app
•Mixture of get/put/delete operations
•Value size distribution
•Key distribution
•“Hot” and “cold” keys
•Configure a test, run, RINSE & REPEAT
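To illustrate the “hot” and “cold” keys bullet: the sketch below (plain Python, not basho_bench) contrasts a uniform key generator with a skewed one, where a handful of hot keys absorb most of the traffic. The Pareto shape parameter is an arbitrary choice for the demo:

```python
import random
from collections import Counter

def uniform_key(n_keys):
    """Every key equally likely: no hot spots."""
    return random.randrange(n_keys)

def pareto_key(n_keys, shape=1.5):
    """Skewed distribution: a few 'hot' keys get most of the traffic."""
    # Draw from a Pareto distribution (>= 1) and fold it into the key space.
    k = int(random.paretovariate(shape)) - 1
    return min(k, n_keys - 1)

random.seed(42)
hits = Counter(pareto_key(10_000) for _ in range(100_000))
hot_share = sum(c for _, c in hits.most_common(10)) / 100_000
print(f"top 10 keys receive {hot_share:.0%} of requests")
```

A benchmark that only uses uniform keys will overstate cache misses for workloads like this, which is why the key distribution belongs in the test design.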
72. Plug
Interested in learning about support, consulting, or Enterprise features? Email info@basho.com or go to http://www.basho.com/contact.html to talk with us.
www.basho.com
Editor's Notes
Why do you benchmark
 - minimize surprises during deployment
 - test expected load
 - test peak load
 - see how much room there is to scale
 - find failure points - in your application or in the data store
Goals of benchmarking
 - Two basic kinds of tests
 - Throughput tests - how much can you throw at it
 - Latency tests - how well does it respond at a fixed load
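A quick sketch of the difference between the two kinds of test, using made-up service times instead of a live cluster: throughput summarizes how much work completed, while the latency view exposes the tail that a mean would hide.

```python
import statistics

# Synthetic per-request service times in milliseconds (stand-ins for real samples).
latencies_ms = [1.2, 0.9, 1.1, 35.0, 1.0, 1.3, 0.8, 1.1, 1.2, 0.9]

# Throughput view: operations completed per second of busy time.
throughput = len(latencies_ms) / (sum(latencies_ms) / 1000.0)

# Latency view: how requests respond at that load; tails matter more than means.
p50 = statistics.median(latencies_ms)
p_max = max(latencies_ms)

print(f"throughput ~{throughput:.0f} ops/s, median {p50} ms, worst {p_max} ms")
```

Here one 35 ms outlier barely moves the throughput number but dominates the worst-case latency, which is exactly what a fixed-load latency test is designed to surface.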
Introduce basho_bench
 - Credit for Dizzy - created after years of experience testing instant messaging and
   database software.
 - Created for internal benchmarking
 - Designed for us to evaluate anything shaped like a key/value store
 - Used it heavily in our development and testing of innostore and bitcask.
 - Extensible
How basho bench works
 - event generator with weights
 - scheduler to trigger the events
 - worker threads call the driver to execute the events
 - key generators
 - value generators
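The notes above describe the moving parts; here is a toy Python model of the same shape (weighted event generation feeding a driver), with a dummy driver standing in for a real datastore client:

```python
import random

# Weighted operation mix, in the spirit of {operations, [{get,4},{put,4},{delete,1}]}
OPS = [("get", 4), ("put", 4), ("delete", 1)]

def next_op():
    """Event generator: pick an operation according to its weight."""
    names, weights = zip(*OPS)
    return random.choices(names, weights=weights)[0]

class DummyDriver:
    """Stands in for a driver module; a worker executes one call per event."""
    def run(self, op, key, value=None):
        return (op, key)  # a real driver would hit the datastore here

random.seed(1)
driver = DummyDriver()
results = [driver.run(next_op(), key=random.randrange(100)) for _ in range(9000)]
counts = {op: sum(1 for o, _ in results if o == op) for op in ("get", "put", "delete")}
print(counts)
```

With a 4:4:1 weighting, roughly four ninths of the executed events are gets, which is the property the event generator is responsible for maintaining.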
Say why this is a bad bad thing
 - benchmarks should be long running - make sure you saturate things,
   otherwise you don't see the effect of a cache being filled, or pauses
   due to pages being flushed, trees being rebalanced, etc.
Why benchmarking is hard
 - You will have to iterate a lot
 - Has the benchmarking tool hit the limit or is it the system under test?
 - Run a second copy from another server
Troublespots to look for
 - running out of file handles
 - running low on socket connections
 - swapping
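On Linux/macOS you can check the first of those limits from Python's stdlib before starting a run; the 4096 threshold below is just an arbitrary rule of thumb, not a basho_bench requirement:

```python
import resource

# File-descriptor headroom: each open socket or file consumes one descriptor.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Arbitrary heuristic: many concurrent workers and connections add up quickly.
if soft < 4096:
    print("consider raising `ulimit -n` before a large benchmark run")
```

Socket backlog and swapping are easiest to watch from the OS side (e.g. netstat/vmstat) while the test is running.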
Recommendations
 - Take metrics from your application - measure the mix of get/put/delete operations
 - Provide a map/reduce job to find out data distributions
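The recommendation above can be sketched as a small script over exported metrics; the records here are invented stand-ins for what you'd pull from your application's logs:

```python
from collections import Counter
import statistics

# Sample of (operation, value_size_bytes) records exported from application logs.
# The numbers are made up; in practice, derive them from your real traffic.
records = ([("get", 0)] * 70 + [("put", 2048)] * 20
           + [("put", 512)] * 8 + [("delete", 0)] * 2)

# Operation mix: this becomes the weights in the benchmark's operation list.
mix = Counter(op for op, _ in records)
total = sum(mix.values())
print({op: f"{n / total:.0%}" for op, n in mix.items()})

# Value size distribution: this informs the benchmark's value generator.
sizes = sorted(size for op, size in records if op == "put")
print("median put size:", statistics.median(sizes), "bytes")
```

The resulting mix and size distribution plug directly into a test configuration, so the benchmark exercises the store the way the application actually does.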