This document discusses Exactpro's approach to testing exchange trading systems. It provides an overview of Exactpro as a company focused on functional and non-functional testing of financial market infrastructures. It then discusses Exactpro's testing approach, including creating load profiles, performance testing, resilience testing, automation of resilience testing, and mining defects from test data. Specific techniques covered include deploying monitoring tools and collecting, storing, and analyzing data to identify issues and ensure system resilience.
Trading Clearing Systems Test Automation, Iosif Itkin
The document describes Exactpro's recommended approach to test automation, which involves progressively building up test automation capabilities over 6 steps: 1) Testing server functionality via standard protocols, 2) 'GUI bypass' testing, 3) Connecting to the GUI, 4) Semi-automated GUI testing, 5) Fully automated GUI testing, and 6) Creating a 'Big Button' test framework. It then provides explanations and details of Exactpro's bespoke test automation tools that implement different aspects of this approach, including Sailfish, Shsha, ClearTH, and MiniRobots. Key principles for effective test automation design are also outlined.
Trading Systems: Testing at the Confluence of FT and NFT, Iosif Itkin
EXTENT Trading Technology Trends & Quality Assurance Conference in Obninsk, 2 March, 2013
Trading Systems:
Testing at the Confluence of FT & NFT
Alexey Zverev, Managing Director
Alyona Bulda, QA Project Lead
Ivan Bobrov, HFT Analyst
Modern business drivers are continually pushing to reduce the time it takes to get a product or service to market, reduce the risk and cost associated with that, and to improve quality.
In laboratories, delivering an analytical result that's 'right first time' (RFT) is the answer: no reprocessing of data or re-running of injections, and no out-of-specification (OOS) results or reporting/calculation errors.
Using chromatography data system tools for RFT analysis automatically delivers high-quality results and confidence in those results, lower cost of analysis, improved lab efficiency, and faster release to market and return on investment (ROI).
Introducing the principle of Operational Simplicity and how this is applied in the Chromeleon 7 user interface (Console/Studio, Categories, MiniPlots, Ribbons, Deleted Items)
Learn more about our chromatography data system: http://www.thermoscientific.com/en/about-us/general-landing-page/chromeleon-resource-center.html?ca=chromeleon
Thermo Scientific Chromeleon 7 CDS can streamline an entire enterprise chromatography laboratory. It features a client-server architecture that allows for centralized management, data storage, and security across the network. During network failures, it ensures continued operation and data security using a local secure data vault. The system integrates a variety of tools for administration, licensing, scheduling, and user management. It also provides integration capabilities with third-party software like LIMS.
Showing universal instrument control and ways to increase instrument uptime by getting more “right first time” analyses (instrument control, eWorkflows™, Smart Startup, SST/IRC)
Learn more about our Chromatography Data System Chromeleon:
http://www.thermoscientific.com/en/about-us/general-landing-page/chromeleon-resource-center.html?ca=chromeleon
Smarter Workflows with Thermo Scientific Chromeleon CDS, Oskari Aro
This document discusses features in Chromeleon CDS software for automating chromatography workflows, including:
- eWorkflows that automate sample analysis from start to finish in a customizable, multi-step process.
- Peak detection algorithms like Cobra and SmartPeaks that automatically integrate peaks, including for unresolved peaks.
- System Suitability Tests (SST) and Intelligent Run Control (IRC) that define test criteria during runs and allow pass/fail actions like automatic sample dilution.
Chromeleon CDS software now supports mass spectrometry (MS) instrument control and data processing, allowing laboratories to integrate MS into their chromatography data system (CDS) workflow. Key features include native MS instrument drivers for remote control and monitoring, MS-specific data organization and visualization tools, a suite of MS data processing tools including extracted ion chromatogram creation and library searching, and reporting objects tailored for MS data. The integrated CDS approach provides advantages like single software validation, enhanced data security, and use of Chromeleon's compliance and data processing features for MS data.
1. The document discusses tools in Thermo Scientific's Chromeleon Chromatography Data System for ensuring analytical results are "right first time" without needing reprocessing or re-running injections.
2. It describes the Sequence Ready Check, which verifies sequences will run correctly by checking for issues before and during runs, and the Instrument Method Check, which identifies issues with instrument methods. These help labs achieve more right first time analyses.
3. Smart Startup is also covered, which automates instrument initialization and equilibration to ensure the first injection is valid, reducing wasted time and resources compared to manual equilibration. It aims to remove subjectivity and aid in efficient method switching.
Reconciliation Testing Aspects of Trading Systems Software Failures, Iosif Itkin
Preliminary proceedings of the 8th Spring/Summer Young Researchers' Colloquium on Software Engineering (SYRCoSE 2014), Saint Petersburg. ISBN 978-5-91474-020-4, pp. 125-129.
Anna-Maria Kriger, Kostroma State Technological University
Alyona Pochukalina, Obninsk Institute for Nuclear Power Engineering
Vladislav Isaev, Yuri Gagarin State Technical University of Saratov
Exactpro Systems
Flexible reporting tools (Report Designer, Find Variables, Check Report)
Learn more about our chromatography data system Chromeleon: http://www.thermoscientific.com/en/about-us/general-landing-page/chromeleon-resource-center.html?ca=chromeleon
The document discusses various technologies that can be used to implement risk controls for financial trading systems, including firewalls, microcontrollers, FPGAs, ASICs, and software-based approaches. It provides details on how each technology works and considers factors like costs, performance, and flexibility. Specific applications mentioned include options pricing, Monte Carlo simulations, and calculating risk exposure for portfolios. The document advocates for concurrent computing on GPUs to improve performance of computationally intensive financial calculations.
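The Monte Carlo simulations mentioned above are straightforward to sketch. The following is a minimal plain-Python illustration (not taken from the deck) of pricing a European call option by simulation; the function name and parameters are hypothetical, and a GPU implementation of the kind the document advocates would parallelize the path loop.

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call under geometric Brownian
    motion: a plain-CPU sketch of the computation the slides suggest
    offloading to GPUs."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)               # one risk-neutral endpoint
        st = s0 * math.exp(drift + vol * z)   # terminal asset price
        payoff_sum += max(st - k, 0.0)        # call payoff
    return math.exp(-r * t) * payoff_sum / n_paths
```

With enough paths the estimate converges to the Black-Scholes value; the independent-path structure is exactly what makes this workload a good fit for concurrent hardware.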
Access Assurance Suite Tips & Tricks - Lisa Lombardo, Principal Architect, Iden..., Core Security
Everyone loves a good tip, like using toothpaste to clear up hazy car headlights. In this session, Identity users will learn from the master, our lead architect, Lisa Lombardo, as she goes through tips and tricks to make sure you’re getting the most out of your IAM deployment. Come with your questions about Core Access, Core Compliance, and Core Password.
1. Testing object-oriented programs presents unique challenges compared to procedural programs due to features like encapsulation, inheritance, and polymorphism. The basic unit of testing for OO programs is the class, not individual methods.
2. Inherited methods may need to be retested in subclasses to ensure correct behavior given the new context. Overridden methods also require retesting. Deep inheritance hierarchies can weaken encapsulation and reduce testability.
3. Encapsulation hinders testing by preventing access to attribute values, requiring workarounds like state reporting methods. Regression testing is especially important for OO code due to changes potentially affecting many subclasses.
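The state-reporting workaround described above can be sketched as follows. All class and method names here are hypothetical illustrations, not from the document: a private balance stays encapsulated, while a read-only snapshot method gives tests something to assert on, and the subclass shows why inherited methods need retesting in their new context.

```python
class Account:
    """Hypothetical example class; attributes are encapsulated."""

    def __init__(self, owner: str, balance: float = 0.0):
        self._owner = owner
        self._balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def report_state(self) -> dict:
        """State reporting method: read-only snapshot for test assertions."""
        return {"owner": self._owner, "balance": self._balance}


class SavingsAccount(Account):
    """Subclass context: inherited deposit() should be retested here,
    since apply_interest() changes the state it operates on."""

    def apply_interest(self, rate: float) -> None:
        self._balance *= (1 + rate)
```

A test can now exercise the subclass and verify internal state via report_state() without reaching into private attributes.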
The PAC aims to promote engagement between experts from around the world and to create relevant, value-added content shared between members. For Neotys, it strengthens our position as a thought leader in load and performance testing.
Since its beginning, the PAC has been designed to connect performance experts during a single event. In June, over 24 hours, 20 participants convened to explore several topics on the minds of today's performance testers, such as DevOps, shift left/right, test automation, blockchain, and artificial intelligence.
The document discusses the various types of testing required to ensure compliance with the Faster Payments Service (FPS), including system design, functional, regression, performance, and technical testing. It notes that FPS requires real-time decision making and transaction processing, precision is important, and both positive and negative scenarios need testing. Finally, it emphasizes that testing should be a comprehensive effort integrated throughout project development and that the FPS scheme has formal testing requirements that must be passed.
Dynamic data processing tools to minimize time spent on chromatogram review and integration (Dynamic Data Linking, SmartLink, Cobra, SmartPeaks).
Learn more about our chromatography data system Chromeleon: http://www.thermoscientific.com/en/about-us/general-landing-page/chromeleon-resource-center.html?ca=chromeleon
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2qoUklo.
Mark Price talks about techniques for making performance testing a first-class citizen in a Continuous Delivery pipeline. He covers a number of war stories experienced by the team building one of the world's most advanced trading exchanges. Filmed at qconlondon.com.
Mark Price is a Senior Performance Engineer at Improbable.io, working on optimizing and scaling reality-scale simulations. Previously, he worked as Lead Performance Engineer at LMAX Exchange, where he helped to optimize the platform to become one of the world's fastest FX exchanges.
This document outlines a project that tested the performance and power usage of applications running simultaneously on multicore processors. It discusses benchmarking tools like performance counters, PAPI, and the HPC Toolkit. Tests were run on AMD and Intel processors using C-Ray and Ramspeed applications pinned to specific cores. Control tests showed baseline performance for each application alone on each core. Interference tests examined increases in runtime, cache misses, and power when applications shared the processor. Results showed interference effects. Future work could test more applications and cores simultaneously to better understand multicore interference.
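Pinning a workload to a specific core, as the tests above did, can be sketched in a few lines. This is a hypothetical, Linux-only illustration using os.sched_setaffinity, not the project's actual harness (which used hardware counters via PAPI and the HPC Toolkit).

```python
import os
import time

def pinned_run(core, workload, *args):
    """Time `workload` with this process pinned to a single core.
    Linux-only: os.sched_setaffinity is unavailable on some platforms."""
    old = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {core})        # pin to the chosen core
    try:
        start = time.perf_counter()
        result = workload(*args)
        elapsed = time.perf_counter() - start
    finally:
        os.sched_setaffinity(0, old)       # restore the original mask
    return result, elapsed

def busy_sum(n):
    """Stand-in workload for C-Ray/Ramspeed-style benchmarks."""
    return sum(i * i for i in range(n))
```

Running the same pinned workload alone, and then alongside another process pinned to a sibling core, gives the control and interference measurements the study compares.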
Training Webinar: Effective Platform Server Monitoring, OutSystems
In this webinar we look at how to effectively implement good monitoring practices for your servers and applications.
Recorded webinar: https://www.outsystems.com/learn/courses/29/webinar-effective-platform-server-monitoring/
Free Online training: https://www.outsystems.com/learn/courses/
Follow us on Twitter http://www.twitter.com/OutSystemsDev
Like us on Facebook http://www.Facebook.com/OutSystemsDev
Architecting for the cloud storage build test, Len Bass
This document discusses best practices for deploying applications to the cloud, including:
- Using a deployment pipeline with continuous integration, integration testing, and staging environments to minimize errors and delays.
- Managing versions and branches to prevent errors from multiple teams working simultaneously.
- Performing integration testing after each commit to catch errors early.
- Maintaining separate databases for different environments like test vs production.
- Using feature toggles to allow uncompleted code to be checked in without breaking builds.
- Performing staging tests using production data and load to thoroughly test before deployment.
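The feature-toggle practice above can be sketched minimally; all names here are hypothetical illustrations, not a production toggle framework.

```python
# Toggle registry; in practice this would be loaded from per-environment
# configuration rather than hard-coded.
TOGGLES = {"new_checkout": False}

def is_enabled(name):
    return TOGGLES.get(name, False)

def legacy_checkout(cart):
    return {"total": sum(cart), "pipeline": "legacy"}

def new_checkout(cart):                 # unfinished path: ships dark
    return {"total": sum(cart), "pipeline": "new"}

def checkout(cart):
    # The toggle decides at runtime which path executes, so incomplete
    # code can be checked in without breaking the build.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping the toggle in one environment (e.g. staging) lets the new path be exercised with production-like data before it is enabled everywhere.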
PCD (Process Control Daemon) is a lightweight system-level process manager for Embedded-Linux-based projects (consumer electronics, network devices, etc.).
PCD starts, stops, and monitors all the user-space processes in the system in a synchronized manner, using a textual configuration file.
PCD recovers the system in case of errors and provides useful, detailed debug information.
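The start/monitor/recover pattern that PCD implements can be illustrated with a generic supervisor loop. This is not PCD's actual configuration format or code, just a hypothetical Python sketch of the idea.

```python
import subprocess
import time

def supervise(processes, cycles=3, poll=0.05):
    """Start each configured process, poll for exits, and restart the
    ones marked for recovery. Returns the number of restarts performed.
    `processes` mimics a textual config: name, command, restart policy."""
    running = {p["name"]: subprocess.Popen(p["cmd"]) for p in processes}
    restarts = 0
    for _ in range(cycles):
        time.sleep(poll)
        for p in processes:
            proc = running[p["name"]]
            if proc.poll() is not None and p["restart"]:
                running[p["name"]] = subprocess.Popen(p["cmd"])
                restarts += 1
    for proc in running.values():       # clean shutdown
        proc.terminate()
        proc.wait()
    return restarts
```

A real process manager adds dependency ordering between processes (the "synchronized manner" above) and structured debug output on failure.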
This document discusses non-functional testing approaches for financial markets software. It describes the structure of non-functional testing teams, how to prepare tests by configuring load injectors and defining load shapes, and the types of non-functional tests performed, including latency measurements, capacity tests, DLC testing, failover testing, and other approaches to evaluate system performance under stress conditions.
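Defining a load shape of the kind described above typically means specifying a target message rate over time: a ramp up, a plateau at peak rate, and a ramp down. A minimal sketch (hypothetical names, not an Exactpro tool):

```python
def load_shape(ramp_up, plateau, ramp_down, peak_rate):
    """Per-second message-rate profile: linear ramp up to peak_rate,
    hold for the plateau, then linear ramp back down to zero.
    Durations are in seconds; the result drives a load injector."""
    shape = [peak_rate * (s + 1) / ramp_up for s in range(ramp_up)]
    shape += [peak_rate] * plateau
    shape += [peak_rate * (ramp_down - s - 1) / ramp_down
              for s in range(ramp_down)]
    return shape
```

A capacity test would step peak_rate upward across runs until the system degrades; a stress test holds the plateau beyond the rated capacity.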
The differing ways to monitor and instrument, Jonah Kowall
FullStack London July 15th, 2016
Monitoring is complicated, and in most organizations it consists of far too many tools owned by many teams, each looking at a single component myopically. These tools collect metrics and logs from the devices and software emitting them. Increasingly, modern companies are creating their own instrumentation, but there is a large base of generic software instrumentation. Fixing monitoring issues requires people, process, and technology. In this talk we will cover many common issues seen in the real world, for example, deciding what should be monitored or collected from a technology and a business perspective. This requires process and coordination.
We will investigate which instrumentation is most scalable and effective across languages. This includes the commonly used APIs and ways to capture data from common languages like Java, .NET, and PHP, but we'll also go into methods that work with Python, Node.js, and Go. We will cover browser and mobile instrumentation techniques: how they are done, which APIs are being used, and what open source tools and frameworks can be leveraged. Most importantly, how to coordinate and communicate requirements across your organization.
Attendees of this session will walk away with a clear understanding of:
What is instrumentation, and what do I instrument, collect, and store?
How much overhead instrumentation adds and how collection can be accomplished on common software stacks.
How to work with application owners to collect business data.
How correlation works in custom open source or packaged monitoring tools.
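Home-grown instrumentation of the kind the talk describes often starts as a simple latency recorder. A minimal hypothetical sketch; the metric names and storage are illustrative, and a real system would export these to a monitoring backend instead of keeping them in memory:

```python
import functools
import time
from collections import defaultdict

METRICS = defaultdict(list)     # metric name -> list of latencies (seconds)

def instrument(name):
    """Record the wall-clock latency of every call under `name`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@instrument("db.query")
def fake_query():
    time.sleep(0.01)            # stand-in for real work
    return "rows"
```

Tagging metrics with business dimensions (customer, order value) rather than only technical ones is what makes the "collect business data" point above possible.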
Trading Day Logs Replay at TMPA-2014 (Trading Systems Testing), Iosif Itkin
The document discusses limitations of log replay testing for modern trading systems and introduces three test tools - Sailfish, Load Injector, and Mini-Robots. Sailfish is for functional testing via message injection. Load Injector performs non-functional load and stress testing. Mini-Robots enables testing scenarios requiring multiple participants. While these tools can recreate many failures, 100% accurate replay of a full trading day is challenging due to non-determinism and complexity of matching engine logic and interactions.
1) Exactpro is a specialist QA firm focused on testing financial systems that was acquired by the London Stock Exchange Group in 2015.
2) The London Stock Exchange Group is a leading international exchange group that traces its history back to 1698 and has over 5,500 employees.
3) Exactpro uses automated testing tools like Sailfish and ClearTH to test systems, as well as techniques like formal verification, crowd-sourced testing, and machine learning.
ClearTH Test Automation Framework: Case Study in IRS & CDS Swaps Lifecycle Mo..., Iosif Itkin
Synchronize Europe
18th June 2019
Iosif Itkin, co-CEO and co-founder, Exactpro
Using the ISDA CDM Swaps application, simultaneously execute multiple end-to-end scenarios for DAML applications in capital markets - validate with actual contract data on ledger.
Performance testing is one of the kinds of Non-Functional Testing. Building any successful product hinges on its performance. User experience is the deciding unit of fruitful application and Performance testing helps to reach there. You will learn the key concept of performance testing, how the IT industry gets benefitted, what are the different types of Performance Testing, their lifecycle, and much more.
Rational Developer for z Systems and Rational Integration Tester can be used to test mainframe applications with and without live data. They allow developing and testing applications in isolation using database and program stubs to virtualize interactions with DB2 and CICS. This reduces wait times and allows testing early in the development cycle. Benefits include increased productivity, quality, and reduced risk through decoupling of delivery schedules.
This document introduces a self-service metadata driven data loading platform developed by Walmart to simplify and optimize the process of onboarding and running data applications. The key components of the platform include a centralized metadata store, connectors to integrate various data sources and targets, an orchestrator to build optimized execution plans, a schedule optimizer to prioritize jobs, and telemetry dashboards for monitoring. The goal of the platform is to dramatically increase developer productivity, provide a low-code experience, and intelligently manage resources and job scheduling across applications.
Performance Tuning Oracle Weblogic Server 12cAjith Narayanan
The document summarizes techniques for monitoring and tuning Oracle WebLogic server performance. It discusses monitoring operating system metrics like CPU, memory, network and I/O usage. It also covers monitoring and tuning the Java Virtual Machine, including garbage collection. Specific tools are outlined for monitoring servers like the WebLogic admin console, and command line JVM tools. The document provides tips for configuring domain and server parameters to optimize performance, including enabling just-in-time starting of internal applications, configuring stuck thread handling, and setting connection backlog buffers.
Using JMeter and Google Analytics for Software Performance TestingXBOSoft
Ed Curran, VP of Engineering at XBOSoft, shares some of his hands on experience in working with JMeter for load and performance testing. In the webinar, he provided explanations of different types of performance testing and how you can use Google Analytics to understand what users are really doing on your web apps and then how to leverage JMeter and analyze the results to improve your app's performance.
Big Data Berlin v8.0 Stream Processing with Apache Apex Apache Apex
This document discusses Apache Apex, an open source stream processing framework. It provides an overview of stream data processing and common use cases. It then describes key Apache Apex capabilities like in-memory distributed processing, scalability, fault tolerance, and state management. The document also highlights several customer use cases from companies like PubMatic, GE, and Silver Spring Networks that use Apache Apex for real-time analytics on data from sources like IoT sensors, ad networks, and smart grids.
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder, DataTorrent - ...Dataconomy Media
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder of DataTorrent presented "Streaming Analytics with Apache Apex" as part of the Big Data, Berlin v 8.0 meetup organised on the 14th of July 2016 at the WeWork headquarters.
20 Simple Questions from Exactpro for Your Enjoyment This Holiday SeasonIosif Itkin
Warmest wishes for a happy holiday season and a wonderful New Year!
We look forward to our continued collaboration in 2020. Thank you for your support.
This document describes a metadata-driven data loading framework that aims to simplify and optimize the onboarding of data applications at Walmart. The key points are:
1) The framework provides a centralized platform with plug-and-play onboarding capabilities to abstract away the complexities of integrating various data sources, sinks, and processors.
2) It utilizes metadata to configure applications and optimize resource allocation and scheduling based on priority. Connectors provide ready-to-use integrations and custom SQL UDFs allow flexible querying.
3) An orchestrator builds optimized execution plans and schedules application runs, while a scheduler optimizer prioritizes high-priority applications by dequeuing lower-priority jobs if needed.
Iosif Itkin - Network models for exchange trade analysisAIST
The document discusses software testing tools from Exactpro Systems for validating trading systems and ensuring data reconciliation. It introduces several tools the company offers: ClearTH for post-trade testing; MiniRobots for multi-threaded Java testing; Dolphin for market surveillance testing; Shsha for post-transactional analysis; Load Injector for load testing; and Sailfish for end-to-end testing. It also provides background on software quality assurance processes and examples of financial technology failures like the 2012 Knight Capital incident and issues with Facebook's NASDAQ IPO cross.
ATAGTR2017 Unified APM: The new age performance monitoring for production sys...Agile Testing Alliance
The presentation on Unified APM: The new age performance monitoring for production systems was done during #ATAGTR2017, one of the largest global testing conference. All copyright belongs to the author.
Author and presenter : Kaushik Raghavan
Windows 7 client performance talk - Jeff StokesJeff Stokes
This document provides an overview of tools for troubleshooting Windows 7 client performance issues. It discusses Task Manager and Resource Monitor for monitoring system performance and processes. It also covers the Windows Performance Toolkit (Xperf) for tracing applications and the boot process. Other tools covered include the Windows Recovery Environment, Problem Steps Recorder, and Msconfig for troubleshooting startup issues.
According to service scale, there are hundreds or thousands of running containers in your service. Should we monitor each container by microscope or monitor each microservice by magnifier? This depends which granularity can help us find and solve the problems. In this sharing, I will introduce how to use cAdvisor, Icinga2, InfluxDB and Grafana to build a self-hosted monitoring system. In addition, I also discuss with how to embrace open source and share some practical experiences.
Choosing the Best Approach for Monitoring Citrix User Experience: Should You ...eG Innovations
A great user experience is key for the success of any Citrix application virtualization or desktop virtualization initiative. To ensure user satisfaction and productivity, Citrix administrators should monitor the user experience proactively, detect times when users are likely to be seeing slowness, pinpoint the cause of such issues and initiate corrective actions to quickly resolve issues, thereby ensuring user satisfaction and productivity.
A key question is where should the monitoring of the Citrix infrastructure be performed from - the network, the server infrastructure, or from the client?
View this presentation to:
• Learn about the different approaches to Citrix user experience monitoring, their benefits and shortcomings
• Hear about a hybrid approach that provides the most cost-effective yet comprehensive monitoring for a Citrix server farm
• See a live demonstration of the hybrid Citrix monitoring approach and its ability to cover all aspects of Citrix user experience
For more than 25 years, Applied Systems has been engaged in international projects devoted to the development of high-end measurement and test systems as well as customizable visualization software.
Our profound experience in industrial automation along with proven development techniques allow us to create solutions that are tailored to meet every client’s need.
Similar to Defects mining in exchanges - medvedev, klimakov, yamkovi (20)
Blockchain technology-in-fin tech - Anton SitnikovDataFest Tbilisi
- Exactpro is a specialist firm focused on functional and non-functional testing of exchanges, clearing houses, and other financial market infrastructures. It was founded in 2009 and now employs 550 specialists.
- The document discusses Exactpro's software testing services for mission critical financial technology and clients regulated by financial authorities. It also provides an overview of Corda, a distributed ledger platform, covering nodes, identities, states, transactions, and more.
- The summary highlights Exactpro's business, services testing financial technology, and introduces Corda as a topic covered in the document.
Using frictionless data to improve data quality - Jo BarrattDataFest Tbilisi
The document discusses using Frictionless Data to improve data quality by introducing the Data Package format which packages tabular data along with a schema to enable tool integration and interoperability. It promotes using tools like Goodtables for data validation either through a command line interface or web interface to continuously check data quality. The document also provides information on the Frictionless Data community and resources for learning more.
The document summarizes a workshop on using Datawrapper to create effective data visualizations. It discusses dos and don'ts of data visualization design, such as choosing charts that improve readability, using visual elements to make statements pop out, and showing nuance in data. The workshop demonstrates how to build charts and maps on the Datawrapper platform and introduces the company's team.
This document provides tips for securing digital devices, data, communications, and accounts. It recommends enabling passwords and screen locks on devices, encrypting data and backups, using HTTPS, VPNs and privacy-focused tools for communications, and employing unique, strong passwords stored in a password manager for accounts. The key aspects covered are requiring authentication to access devices and files, encrypting information both in transit and at rest, being selective about what services have access to personal data, and using passwords that are long and unique between accounts.
R package development, create your own package isabella golliniDataFest Tbilisi
This document provides an introduction to creating an R package. It discusses preliminaries like where R packages come from and where they are stored on a user's computer. The document explains the development workflow for a package, including using devtools to load and test code. It demonstrates creating a basic package using usethis and modifying sample code from an existing package to add a character and test the change. The goal is to get users comfortable with the basic process and differences compared to regular script development in R.
R package development, create package documentation isabella golliniDataFest Tbilisi
This document provides an overview of creating package documentation using roxygen2 and R markdown. It discusses writing function documentation with roxygen2 comments, previewing documentation locally, and creating package documentation like vignettes using R markdown. The document demonstrates the documentation workflow and encourages documenting other objects like data and classes. It also introduces creating package websites using pkgdown to showcase the package documentation.
Open data for social impact and better decision making - Denis GurskyDataFest Tbilisi
The document discusses civic tech and how to get startup projects funded. It describes civic tech as using technology and open data to tackle social problems and make better government decisions. It then provides examples of Ukrainian startups in areas like agriculture, law, and transportation that have received funding. The document advises structuring projects around UN Sustainable Development Goals and impact metrics to appeal to impact investors. It provides a template for pitching projects, highlighting goals, target audiences, and expected social and financial returns.
The document provides an overview of machine learning for sequences and natural language processing tasks. It discusses fundamentals of representing text as sequences, applications of sequence-to-sequence models like machine translation and transliteration, and challenges like ambiguity, noisy data, and evaluating generated sequences. It also describes a lab on character-level neural machine translation with Fairseq and issues with current approaches like lack of understanding of when models are wrong.
How to win a machine learning competition pavel pleskovDataFest Tbilisi
This document provides tips for winning machine learning competitions on Kaggle from a Kaggle Grandmaster. It discusses choosing the right competition based on factors like dataset size and number of participants. It also offers strategies like using specialized machine learning software and hardware, collaborating on teams, leveraging data leakages, and ensemble methods like stacking. The document emphasizes the benefits of competitions for rapidly advancing skills and building experience and portfolios, as well as some of the cons like the significant time commitment required.
This document discusses data analysis in life sciences. It begins by outlining machine learning applications in areas like molecular biology and precision medicine. Various types of biological and biomedical data that can be collected are described, including genomic, epigenomic, gene expression, proteomic, metabolomic, and single-cell data. Common data science tasks in life sciences like diagnosis, prognosis, clustering, and network reconstruction are also outlined. The document then discusses challenges around open data in life sciences, providing examples of clinical trial repositories and standards for sharing molecular data. Finally, it briefly introduces some popular online tools and repositories for analyzing and sharing genomic, proteomic, clinical, and pathway data.
After David Bowie died in early 2016, London-based designer Valentina D’Efilippo and British data journalist Miriam Quick decided to pay tribute by turning one of his best-known songs, Space Oddity, into data visualization. The result was Oddityviz, a collection of ten engraved records visualizing data from the song. Each 12-inch disc deconstructs the track in a different way: melodies, harmonies, lyrics, structure and story are transformed into new visual systems. The records are accompanied by a series of matching posters and a moving image piece that visualizes the music in real time. Oddityviz was exhibited at W+K London in January 2017. Miriam will talk about how they created the project and what they learned along the way.
Can data journalism save us from fake news? by Rayna BreuerDataFest Tbilisi
This document discusses how data journalism can help combat fake news. It notes that fake news is often repeated in headlines, teasers and videos to seem more credible. The document advocates keeping charts and visualizations simple and learning basic math and statistics skills. However, it also notes that data journalism alone has not solved the problem of fake news, as the underlying causes still need to be addressed.
Losing my favourite game: how journalists are not catching up with open data ...DataFest Tbilisi
Slowly dying open data portals and apps, mistakes in data interpretation and collaborations that never happened - I have now a sad collection of missed opportunities for open data in journalism, based on my research of open data investigative journalism in Russia and experience in teaching data journalism in Western Balkans and Central Asia. Let’s talk about the global trends and regional specifics of not using open data to its full potential, and discuss what are the ways forward to overcome the barriers. Growing a healthy community of open data users in the region and engaging with global data communities are surely first on the list.
There is plenty of data in the world, but it’s not always easy to make sense of it. That’s what data visualization is trying to solve, to see the stories behind the data and make stories visible. Information design translates those stories into a more universal - visual language, that can be perceived and felt easily. What are the challenges that information designers are encountering when creating visualizations to enable people to understand, feel and care about the issues and stories that are hidden into the numbers? How do they combine principles of graphic design and data understanding skills to enrich the data with visualizations.
Feeling the data: how to build stories that people care about by Thomas burnsDataFest Tbilisi
Data can be powerful building blocks for storytelling, but facts and figures alone cannot make a good story. How can we transform data into stories that give life and longevity to the ideas we are trying to communicate? How can we turn hard science into engaging, emotional experiences for our audiences? Why is this important? Join story producer Thomas Burns as he walks us through the elements of strong storytelling and describes why building strong narrative is critical for realizing the full potential of our message.
Stories vs Narratives. Using data for good. by Jakub GornickiDataFest Tbilisi
For a long time we were focused on providing more and more data(sets) with hope that citizens will reach to the source and magic will happen. It didn't. We left a space open for narratives who only use data to serve the goal they want to prove rather then to seek the truth. What should data people do? How to combine stories with data? Why care not only about the source but the outcome? And how not to narrative giver but a storyteller.
The Power of Open Source Investigation by Christiaan triebertDataFest Tbilisi
What can journalists and regular citizens do to investigate governments and armed groups who don't or hardly provide any information about incidents, bombings, tortures or corruption? A growing number of citizens are pursuing facts themselves. Bellingcat, an international investigative collective, uses online open source information in combination with digital tools to uncover the facts themselves. How do they work, and which tools and methods do they use? In this short talk, the audience will be shown the power of open source investigation.
Open data: for eveyone by everyone by Jason addieDataFest Tbilisi
The document discusses the benefits of open data and argues that development agencies and NGOs should release their data to the public. It notes that 250 years ago Sweden created the first freedom of information legislation, and currently 100 countries have similar laws. Open data can result in increased public sector efficiency, participation, and new jobs and profits. However, achieving open data is difficult. The document recommends that donors create a portal to store grantee data and establish an organization to educate, organize, and maintain publicly available data in order to maximize its benefits and avoid data being lost.
Open Data Science: beyond traditional scientific communities by Alexey natekinDataFest Tbilisi
The speaker will share their experience on how to foster Data Science within local communities across the globe, and how they can consistently develop world’s leading expertise in Data Science. These local communities are all enthusiasm-driven, with core value as the free open scientific and engineering knowledge for everyone. In particular, the speaker will talk about different types of events one can setup, beyond traditional meetups. One such event series called ml trainings help them regularly beat everyone on kaggle: 15 members of Open Data Science are in kaggle’s top-100. Science, Drinking, Rock-n-roll.
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...Marlon Dumas
This webinar discusses the limitations of traditional approaches for business process simulation based on had-crafted model with restrictive assumptions. It shows how process mining techniques can be assembled together to discover high-fidelity digital twins of end-to-end processes from event data.
Enhanced data collection methods can help uncover the true extent of child abuse and neglect. This includes Integrated Data Systems from various sources (e.g., schools, healthcare providers, social services) to identify patterns and potential cases of abuse and neglect.
Generative Classifiers: Classifying with Bayesian decision theory, Bayes’ rule, Naïve Bayes classifier.
Discriminative Classifiers: Logistic Regression, Decision Trees: Training and Visualizing a Decision Tree, Making Predictions, Estimating Class Probabilities, The CART Training Algorithm, Attribute selection measures- Gini impurity; Entropy, Regularization Hyperparameters, Regression Trees, Linear Support vector machines.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of May 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
Defects mining in exchanges - medvedev, klimakov, yamkovi
1. Build Software to Test Software
exactpro.com
Defects mining in Exchange trading systems
08/11/2018
Pavel Medvedev, Stanislav Klimakov, Mikhail Yamkovy
2. Contents
- Exactpro company overview
- Intro into trading Exchange systems
- Testing approach
- Creating and handling load profile
- Performance testing
- Resilience testing
- Resilience in market infrastructures
- Automation of resilience testing
- Defects Mining in test data
- Challenges of proprietary software testing in the client’s environment
- Monitoring tools deployment
- Data collection
- Data storage and analysis
3. EXACTPRO: Build Software to Test Software
• A specialist firm focused on functional and non-functional testing of exchanges, clearing houses, depositories and other market infrastructures
• Incorporated in 2009 with 10 people, the company has grown significantly as satisfied clients required more services; it now employs 550 specialists
• Part of the London Stock Exchange Group (LSEG) from May 2015 till January 2018; Exactpro's management bought the company out from LSEG in January 2018
• We provide software testing services for the mission-critical technology that underpins global financial markets. Our clients are regulated by the FCA, the Bank of England and their counterparts in other countries
4. We have a global software Quality Assurance client network
5. Trading system types
• Proprietary Trading & HFT
• Brokerage
• Execution Venue
6. Typical requirements for an Exchange system
● Daily capacity - 200+ mln transactions
● Peak rates - 40,000 transactions per second
● Average round-trip latency - dozens of microseconds
● Availability - 100%
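As a rough sanity check on figures like these, the daily capacity implies an average rate far below the quoted peak. A minimal sketch, assuming for illustration an 8.5-hour trading session (the actual session length is not given in the slides):

```python
# Hypothetical arithmetic: relate a 200M-transaction day to a 40,000 tps
# peak, assuming an 8.5-hour trading session (illustrative assumption).
def implied_average_tps(daily_transactions: int, session_hours: float) -> float:
    """Average transactions per second over the trading session."""
    return daily_transactions / (session_hours * 3600)

avg = implied_average_tps(200_000_000, 8.5)   # ~6,500 tps on average
peak_to_avg = 40_000 / avg                    # peak is roughly 6x the average
```

Under these assumptions the system must absorb bursts several times its average rate, which is why load profiles (below) matter more than a single throughput number.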
7. Typical requirements for an Exchange system
● Daily capacity - 100+ mln transactions
● Peak rates - 40k+ transactions per second
● Average round-trip latency - <100 microseconds
● Availability - 100%
[Figure: scale comparison (3000 trx, 2.5 cm, <1 mm)]
15. Test results analysis
Do we actually send what we think we send?
• Evaluation of message rate ‘per millisecond’ unit and order mix balance:
Message rate per millisecond:
• Internal monitoring stats arbitration:
- Matching Engine’s NEW_ORDERS, CANCELS, AMENDS, etc – rates per second and total amount of transactions
MatchingEngine | NEW | Total=11896058 (2608833,3126952,3532034,2628239), Current=430 (85,103,141,101), Peak=2728 (721,661,746,600)
MatchingEngine | AMEND | Total=45509 (9493,12145,13535,10336), Current=1 (0,0,1,0), Peak=11 (5,5,6,5)
MatchingEngine | CANCEL | Total=9350063 (1957683,2492535,2784674,2115171), Current=357 (72,83,115,87), Peak=2086 (400,565,627,494)
Number of msgs per millisecond | % samples Inbound (into System) | % samples Outbound (from System)
<5     | 55.64% | 55.01%
5-8    |  3.67% |  4.05%
8-10   |  2.60% |  2.77%
10-15  |  5.32% |  5.39%
15-20  |  5.88% |  5.95%
20-80  | 26.85% | 26.78%
>80    |  0.05% |  0.05%

Order mix balance, Partition 1 (share of messages per Matching Engine core):

Message Type | ME core 0 | ME core 1 | ME core 2 | ME core 3 | Total
Order        |     3.74% |     3.02% |     2.00% |     4.14% | 12.89%
Cancel       |     3.56% |     2.89% |     1.93% |     4.02% | 12.39%
Amend        |     0.60% |     0.53% |     0.34% |     0.68% |  2.16%
Quote        |     0.32% |     0.11% |     0.16% |     0.27% |  0.85%
Trades       |     0.24% |     0.18% |     0.13% |     0.29% |  0.84%
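The per-millisecond rate distribution above can be derived from raw message timestamps. A minimal sketch, with the band edges taken from the table and an illustrative timestamp format (seconds as floats):

```python
from collections import Counter

# Bucket message timestamps into per-millisecond counts, then summarise
# what fraction of milliseconds falls into each rate band, mirroring the
# "<5 / 5-8 / ... / >80" table. Bands are half-open [lo, hi).
BANDS = [(0, 5), (5, 8), (8, 10), (10, 15), (15, 20), (20, 80), (80, float("inf"))]

def rate_band_shares(timestamps):
    per_ms = Counter(int(t * 1000) for t in timestamps)  # msgs in each millisecond
    total = len(per_ms)
    shares = []
    for lo, hi in BANDS:
        n = sum(1 for c in per_ms.values() if lo <= c < hi)
        shares.append(n / total)
    return shares
```

Comparing the inbound and outbound distributions produced this way is one concrete way to answer "do we actually send what we think we send".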
16. Latency end-to-end
Latency percentiles:

Percentile | avg | max
100        |  82 | 518
99.99      |  82 | 408
99.9       |  82 | 139
99         |  80 | 103
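One plausible reading of such a table is: for each percentile p, take the fastest p% of round-trip samples and report their mean and maximum (which is why the max drops sharply as outliers are excluded). A sketch under that assumption:

```python
# For each percentile p, report (p, mean, max) over the fastest p% of
# latency samples. The interpretation of the slide's table is assumed.
def latency_profile(latencies, percentiles=(100, 99.99, 99.9, 99)):
    xs = sorted(latencies)
    rows = []
    for p in percentiles:
        k = max(1, int(len(xs) * p / 100))
        subset = xs[:k]                      # fastest p% of samples
        rows.append((p, sum(subset) / k, subset[-1]))
    return rows
```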
17. Daily life cycle
• DLC test: executed in conjunction with the Functional test team.
– Pass the system through a production-like schedule:
• All trading cycles
• All scheduled sessions
– Apply appropriate load during the various phases
– Perform some functional tests under load
– Data consistency checks:
• reconcile output from various sources
• check data for integrity
18. Other Non-Functional tests
• Rapid user action tests (connect-disconnect, logon-logout)
– The system should withstand such user behaviour
– HW resource consumption should not grow
• Slow consumer tests
– The system should handle such users and have protection against them
– HW resource consumption should not grow
• Intensive usage of recovery channels
– The system should be able to handle a high number of requests on recovery channels and to satisfy them
• Massive actions from Market Operations (mass order cancels, mass trade cancels, mass instrument halts)
– The system should handle Market Operations actions such as a mass cancel of 10k active orders or trades.
• Resilience tests
19. Defects mining in Exchange trading systems
21. Financial infrastructures
• Exchanges
• Broker systems
• Clearing agencies
• Ticker plants
• Surveillance systems
Risks associated with financial infrastructure outage:
• Lost profit
• Data loss
• Damaged reputation
22. Distributed high-performance computing
• Bare-metal servers (no virtualization)
• Horizontal scalability
• Redundancy (absence of single point of failure)
23. Resilience tests
● Hardware outages
○ Network equipment failovers (Switches, Ports, Network adapters)
○ Server isolations
● Software outages
○ Simulation of various outage types (SIGKILL, SIGSTOP)
○ Failovers during different system states (at startup / during the trading day / during an auction)
24. What cases to test?
• Failover – failure of the active primary instance (standby becomes active)
• Failback – failure of the active standby instance
• Standby failure – failure of the passive standby instance
• Double failure – simultaneous failure of both instances
25. What tools do we use for resilience testing?
• Test manager with a DSL scenario language
• System monitoring tools
• Load injection tool
• Traffic capturing and parsing tools
• Tools for data storage, visualisation and analysis
26. What kind of data is useful for analysing test results?
• System metrics of all servers and all components (processes)
• Captured traffic of injected load and system responses
• Log files of the system
28. Defects mining in collected data

System statistics:
● Disk usage
● RAM usage
● CPU usage
● Network stats

Captured traffic:
● Transaction statistics
● Response time (latency)
● Throughput

Log files:
● Log entries per second
● Warnings per second
● Errors per second
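The log-file metrics (entries, warnings and errors per second) can be sketched as a small log parser. The `HH:MM:SS LEVEL message` line format here is an assumption, not the actual system's log layout:

```python
import re
from collections import defaultdict

# Count log entries, warnings and errors per second from raw log lines
# of the illustrative form "HH:MM:SS LEVEL message".
LINE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+(\w+)")

def per_second_stats(lines):
    stats = defaultdict(lambda: {"entries": 0, "warnings": 0, "errors": 0})
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts, level = m.groups()
        s = stats[ts]
        s["entries"] += 1
        if level == "WARN":
            s["warnings"] += 1
        elif level == "ERROR":
            s["errors"] += 1
    return dict(stats)
```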
29. Avoiding «dark data»
Symptoms of the «dark data» disease:
● Collecting data «just in case», without knowing its actual purpose
● Storing an excessive amount of historical data (in non-aggregated form) from previous test runs
30. Overnight low touch testing
● Testing is performed without human participation
● Human-friendly reports
● Data is our main value: non-aggregated data is stored until the report is seen by a QA engineer (in case a more detailed investigation is needed afterwards)

Workflow:
1. Prepare environment and test tools (performed by human)
2. Test execution (performed by machine)
3. Real-time data collection and processing (performed by machine)
4. Final report evaluation (performed by human)
31. Rules and thresholds

ALERT:
  METRIC : RSS
  GROWTH : 1GB
  TIME   : 10 MIN

ALERT:
  METRIC : DISK
  GROWTH : 10%
  TIME   : 1 HOUR

[Chart: Server MP101, Process MatchingEngine Primary, Metric RSS (resident set size)]
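The growth rule above (e.g. alert if RSS grows by more than 1 GB within 10 minutes) can be sketched as a sliding-window check over (timestamp, value) samples. This is an illustration of the rule's semantics, not the actual alerting engine:

```python
# Return True if the metric grows by more than `growth` within any
# sliding window of `window_s` seconds. `samples` is a list of
# (timestamp_seconds, value) pairs, oldest first.
def growth_alert(samples, growth, window_s):
    for i, (t0, v0) in enumerate(samples):
        for t1, v1 in samples[i + 1:]:
            if t1 - t0 > window_s:
                break               # later samples are outside this window
            if v1 - v0 > growth:
                return True
    return False
```

For example, RSS going from 0 to 1.6 GB within 10 minutes trips the 1 GB / 10 min rule, while the same growth spread over 20 minutes does not.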
32. Spikes and stairs detection
[Chart: Server OE102, Process FixGateway Standby, Metric RSS (resident set size)]
33. Spikes and stairs detection
Example:
• A CPU usage spike happened on the TransactionRouter component at ~11:49
• Most likely the last scenario step executed prior to 11:49 caused that spike
• Information about this abnormality, and the steps that produced it, will be included in the final report
[Chart: Server CA104, Process TransactionRouter Primary, Metric CPU usage]
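A crude sketch of spike-vs-stair classification on a metric series: a jump that immediately returns to the old level is a "spike", a jump that persists is a "stair". The thresholds here are illustrative, not Exactpro's actual detection rules:

```python
# Classify sudden rises in a metric series as "spike" (returns to the
# old level) or "stair" (settles at a new level). `jump` is the minimum
# rise treated as an event; both thresholds are illustrative.
def classify_jumps(values, jump=3.0):
    events = []
    for i in range(1, len(values) - 1):
        rise = values[i] - values[i - 1]
        if rise >= jump:
            settled = values[i + 1]
            if settled - values[i - 1] >= jump / 2:
                events.append((i, "stair"))   # level stayed elevated
            else:
                events.append((i, "spike"))   # level dropped back
    return events
```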
34. Data reconciliation checks
● Consistency across different data streams
○ Client’s messages
○ Public market data
○ Aggregated market data
● Consistency between data streams and system’s database
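A minimal sketch of such a consistency check, keyed by a hypothetical `order_id` field: report records present in one stream but not the other, plus records that exist in both but differ:

```python
# Reconcile two data streams (e.g. client messages vs. the system's
# database) keyed by a hypothetical "order_id" field. Each stream is a
# list of dicts; field names are illustrative.
def reconcile(stream_a, stream_b, key="order_id"):
    a = {m[key]: m for m in stream_a}
    b = {m[key]: m for m in stream_b}
    missing_in_b = sorted(set(a) - set(b))
    missing_in_a = sorted(set(b) - set(a))
    mismatched = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return missing_in_b, missing_in_a, mismatched
```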
35. DSL scenario example
start load 3000
# Case 1: Failover of MatchingEngine Primary
kill -9 primary MatchingEngine
smoke
start primary MatchingEngine
# Case 2: Failback of MatchingEngine Standby
kill -9 standby MatchingEngine
smoke
start standby MatchingEngine
stop load
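A test manager could dispatch such a scenario line by line. A toy interpreter sketch follows; the handler interface is hypothetical, and a real implementation would drive the load injector and the system under test rather than record calls:

```python
# Toy dispatcher for the DSL above. `handlers` maps command names to
# callables (a hypothetical interface, for illustration only).
def run_scenario(script, handlers):
    executed = []
    for line in script.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                              # skip blanks and comments
        parts = line.split()
        if parts[0] == "start" and parts[1] == "load":
            handlers["start_load"](int(parts[2]))
        elif parts[0] == "kill":
            handlers["kill"](parts[1], parts[2], parts[3])  # signal, role, component
        elif parts[0] == "smoke":
            handlers["smoke"]()
        elif parts[0] == "start":
            handlers["start"](parts[1], parts[2])           # role, component
        elif parts == ["stop", "load"]:
            handlers["stop_load"]()
        executed.append(line)
    return executed
```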
36. Report produced by Test Manager
37. What do we get?

Pros:
• Better test coverage in comparison with manual execution
• Test environments are used 24/7 (an idle system does not help to find issues)
• Efforts go into test coverage and tools improvement, not into test execution

Cons:
• The test harness needs constant support
• Higher tester qualification is required for improving automated scenarios
• Validators may pass an issue that a tester could have noticed in real time
• Test cases and data analysis methods need regular review (to prevent the pesticide paradox)
38. Defects mining in Exchange trading systems
39. Introduction
● Challenges of proprietary software testing in the client’s environment
● Monitoring tools deployment
● Data collection challenges
● Data storage and analysis challenges
40. Production and production-like environments
● Legacy: stable, trusted, suited to working with a particular system
● No ability to make changes at runtime
● No Docker, AppImage or other handy tools
● Portable tools are everything
41. 41 Build Software to Test Software exactpro.com
Proprietary software in the client’s environment
● No complete specification available
● Unknown data exchange and storage formats
● Access and other restrictions
(Diagram: a Gateway → Sequencer → Matching → MarketData pipeline with test databases attached; FIX traffic enters at the Gateway, ITCH market data leaves at the far end, and internal system messages flow in between. The exact inputs and outputs of each component are unknown.)
42. 42 Build Software to Test Software exactpro.com
What kinds of data do we need and when?
● Pre-SOD: system snapshots and backups
● Real-time: system metrics for active testing
● Post-EOD: log data for passive testing and results analysis
43. 43 Build Software to Test Software exactpro.com
How to collect data in real-time?
● Use of available system tools
● Use of monitoring provided by a proprietary software vendor
● Use of third party monitoring tools
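As a small illustration of leaning on available system tools, a metric sampler can be built on the standard library alone. This sketch is illustrative, not a field tool: it samples only disk usage, where a real collector would also read /proc, vendor counters and component logs.

```python
# Minimal real-time metric sampler using only the Python standard
# library. Metric names and the choice of disk usage are illustrative.
import shutil
import time

def sample_metrics(path="/"):
    """Take one timestamped sample of disk usage for the given mount."""
    usage = shutil.disk_usage(path)
    return {
        "ts": time.time(),
        "disk_total": usage.total,
        "disk_free": usage.free,
        "disk_free_pct": round(100 * usage.free / usage.total, 1),
    }
```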
44. 44 Build Software to Test Software exactpro.com
How about reinventing the wheel?
● Independent
● Incorporate all the features we need in one tool
● Remote controlled
● Support of different output formats: protobuf, json, raw binary data
● Support of multiple data consumers with different visibility
● Deliver data on a need-to-know basis only
● Uniform data format across all environments
● Low footprint
45. 45 Build Software to Test Software exactpro.com
Downsides of the brand new bicycle
● Green code: not well tested in the field
● Requires additional resources for support
● Solves only a particular problem
46. 46 Build Software to Test Software exactpro.com
Who should receive real-time data?
● Different tests require dozens of different metrics
● A tester is not able to track all the changes
● All the data should be analyzed on the fly
● Test behaviour should be changed depending on the received data
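"Test behaviour changed depending on the received data" can be expressed as a small decision function evaluated on every incoming sample. A hedged sketch; the metric names, thresholds and action labels are invented for illustration:

```python
# Data-driven test control: an automated check reacts to each metric
# sample instead of a tester watching dashboards in real time.

def next_action(metric):
    """Decide how the running test should react to a fresh sample."""
    if metric["name"] == "cpu_usage" and metric["value"] > 90:
        return "reduce_load"        # back off before the box saturates
    if metric["name"] == "free_mem_pct" and metric["value"] < 10:
        return "abort_and_snapshot" # preserve state for analysis
    return "continue"
```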
47. 47 Build Software to Test Software exactpro.com
High-level view of real-time monitoring
(Diagram: a Management Server running the TestManager (TM), Router, Daemon_M, Daemon_I and a Data Processor; a QA Server; and Servers 1..N, each running its own daemon Daemon_S1..Daemon_SN.)
● Daemon_S (one per server): collecting system info, log parsing, command execution
● Daemon_I: load control and test script execution
● Router: communication between daemons and controllers
● TestManager (TM): automated execution of test scenarios, collecting and processing test information
● Data Processor: transforms, collects and stores data for future use; data visualisation and reporting; data storage, analysis and management
48. 48 Build Software to Test Software exactpro.com
Passive monitoring
(Diagram: the Management Server (TestManager, Data Processor, Router) receives data from Daemon MEP on the Matching Server, which tails the MatchingEnginePrimary matching log, and from Daemon MON on the Monitoring Server, which reads the system events and system metrics logs.)
Example data delivered to the Router:
MatchingEnginePrimary {PID: 1234, RSS: 500MB, CPU Usage: 15%}
MatchingEnginePrimary {STATE: READY}
MatchingEnginePrimary {INTERNAL LATENCY: 10}
System {CPU Usage: 15%, Free Mem: 50%, Free Disk Space: 80%}
The MON Daemon collects system metrics and messages. The MEP Daemon parses the matching log and provides the Router with actual system info.
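A daemon that turns log lines of the `Component {Key: Value, ...}` shape shown above into structured events needs only a small parser. A sketch, assuming that exact layout (the layout is taken from the slide's examples, not from a real protocol spec):

```python
import re

# Parse "Component {Key: Value, Key: Value, ...}" monitoring lines
# into a dict the Router/Data Processor could consume.
LINE = re.compile(r"^(?P<component>\S+) \{(?P<body>.*)\}$")

def parse_event(line):
    m = LINE.match(line.strip())
    if not m:
        return None  # unrecognised line: leave for format analysis
    fields = {}
    for pair in m.group("body").split(","):
        key, _, value = pair.partition(":")
        fields[key.strip()] = value.strip()
    return {"component": m.group("component"), "fields": fields}
```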
49. 49 Build Software to Test Software exactpro.com
Active monitoring
(Diagram: the same topology as for passive monitoring; the TestManager issues an RPC call, "Stop matching log monitor", through the Router to Daemon MEP on the Matching Server, while Daemon MON keeps reporting, e.g. System {CPU Usage: 1%, Free Mem: 75%, Free Disk Space: 83%}.)
When real-time data is not required, a user or an automated scenario can stop or update a task for an active monitor to reduce system load.
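The control flow above, TestManager → Router → daemon, can be sketched in-process. This is a toy model of the dispatch only; a real deployment would use a network RPC framework, and all class and method names here are invented:

```python
# In-process model of an RPC "stop monitor" command routed to a daemon.

class Daemon:
    """Stand-in for a monitoring daemon with named monitor tasks."""
    def __init__(self):
        self.monitors = {"matching_log": True}  # True = running

    def stop_monitor(self, name):
        self.monitors[name] = False
        return f"{name} stopped"

class Router:
    """Forwards calls from controllers to the addressed daemon."""
    def __init__(self, daemons):
        self.daemons = daemons

    def call(self, daemon_id, method, *args):
        return getattr(self.daemons[daemon_id], method)(*args)

router = Router({"MEP": Daemon()})
reply = router.call("MEP", "stop_monitor", "matching_log")
```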
50. 50 Build Software to Test Software exactpro.com
Post-EOD data
● Checkpoints from the TestManager tool
● System and hardware usage stats
● Essential internal metrics from the system under test
51. 51 Build Software to Test Software exactpro.com
What’s wrong with system logs?
Bias: “logs should be human-friendly” (which makes them machine-unfriendly)
...
~|=============================================================================
~|Disk I/O statistics
~|=============================================================================
~|Device Reads/sec Writes/sec AvgQSize AvgWait AvgSrvceTime
~|sda 0.0 ( 0.0kB) 4.1 ( 22.4kB) 0.0 0.0ms 0.0ms
~|sdb 0.0 ( 0.0kB) 0.0 ( 0.0kB) 0.0 0.0ms 0.0ms
~|sdc 0.0 ( 0.0kB) 10.7 ( 70.5kB) 0.0 0.0ms 0.0ms
20181030074410.191|504|TEXT |System Memory Information (from /proc/meminfo)
~|=============================================================================
~|MemTotal: 263868528 kB
~|MemFree: 252390192 kB
...
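Extracting data from a human-friendly block like the one above means matching its layout line by line. An illustrative parser for the /proc/meminfo-style excerpt, assuming the `~|` continuation prefix seen in the sample:

```python
import re

# Pull "Name: <value> kB" fields out of a human-oriented log block.
MEM_LINE = re.compile(r"~\|(\w+):\s+(\d+) kB")

def parse_meminfo(text):
    return {m.group(1): int(m.group(2)) for m in MEM_LINE.finditer(text)}

sample = """
~|MemTotal:       263868528 kB
~|MemFree:        252390192 kB
"""
```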
52. 52 Build Software to Test Software exactpro.com
What’s wrong with system logs?
Not standardized
Release 1:
Release 2:
Oct 30 2017 13:30:28 | SystemComponent:1 | Transfer Queue| Rate=0.00
(W=0.00,L=0.00, Q=0.00, T=0.00), Max Queues=(Pub=0, Pvt=0),
Dec 12 2017 08:10:13 | SystemComponent:1 | Transfer Queue from Rcv Thread to Main
Thread | Rate=0.00 | W=0.00 | L=0.00 | Q=0.00 | T=0.00
Dec 12 2017 08:10:13 | SystemComponent:1 | Max Queues from Rcv Thread to Main Thread
| Pub=0, Pvt=0
53. 53 Build Software to Test Software exactpro.com
How to deal with creative loggers?
● Accept the reality
● No one will change the log format just for you
● No one will ask you before changing the log format
● Regexp-like patterns are our “best friends”
● Automatic log format analysis
UNKNOWN METRIC DETECTED:
[SystemComponent:1]: A To B | Rate=0.00 (W=0.00,L=0.00, Q=0.00, T=0.00), Mode=LOW_LATENCY,
Max Queues=(Pub=0, Pvt=0)
KNOWN METRICS:
[SystemComponent:1]: AToB | Rate=0.00 [W=0.00,L=0.00, Q=0.00, T=0.00], Mode=LOW_LATENCY,
Max Queues=[Pub=0, Pvt=0]
[SystemComponent:1]: ABToWorker | Rate=0.00 (W=0.00,L=0.00, Q=0.00, T=0.00),
Mode=LOW_LATENCY, Max Queues=(Pub=0, Pvt=0)
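Automatic log format analysis of this kind can be sketched as a library of known-format regexes plus an "unknown" bucket: any line that matches no pattern is flagged so the library can be extended after a release. The patterns below mirror the slide's two known metrics and are illustrative, not exhaustive:

```python
import re

# Known metric line formats; anything else is reported as UNKNOWN.
KNOWN_PATTERNS = [
    re.compile(r"\[SystemComponent:\d+\]: AToB \| Rate=[\d.]+ \["),
    re.compile(r"\[SystemComponent:\d+\]: ABToWorker \| Rate=[\d.]+ \("),
]

def classify(line):
    """Return 'known' if the line matches a registered format."""
    return "known" if any(p.search(line) for p in KNOWN_PATTERNS) else "unknown"
```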
54. 54 Build Software to Test Software exactpro.com
Where to store data, and for how long?
● Data is sensitive and should be stored on the client’s side
● Data volume is huge for the limited hardware resources in the test environment
● Data retention:
Current data (kept for 2 weeks): HW stats, system metrics, system configs, traffic
Historical data: anonymised production data, system configs, aggregated test reports
55. 55 Build Software to Test Software exactpro.com
How to use?
● Reporting
● Analysis
● Test improvement
59. 59 Build Software to Test Software exactpro.com
Test improvement
● Comparison of test conditions
● Comparison of test results
● Inspect historical data to introduce more realistic scenarios
60. 60 Build Software to Test Software exactpro.com
Software Testing is Relentless Learning