We discuss the concepts behind CrXPRT 2015, its development process, as well as the application’s structure. We also describe the component tests and detail the statistical processes used to calculate expected battery life.
We discuss the structure of WebXPRT 2015, including the Web technologies and libraries WebXPRT uses, and give detailed descriptions of its component tests and workloads. In addition, we provide a spreadsheet that shows how the benchmark converts raw timings into component test and overall scores. Finally, we show how to automate the benchmark and how to submit test results.
Software construction is an exercise in managing complexity, more so with the spiralling complexity required by modern games. Automated Testing is an industry proven methodology to deliver more reliable complex software, with a fighting chance to do it on time and on budget. And having fun doing so. Crytek is spearheading this idea in the game industry with its flagship title, and now sharing the experience with you: best practices, potential pitfalls, To-Do’s and No-No’s will be shown with real examples of unit testing game code using its proprietary testing framework and tools. Functional Testing and acceptance testing will also be touched on as a viable way of describing and checking game design requirements. And take automated testing to the next level.
Hyper-pragmatic Pure FP testing with distage-testkit7mind
Having a proper test suite can turn ongoing application maintenance and development into pure joy – the best tests check meaningful properties, not the implementation details, and hold no impliict assumptions about their test environment - every test case must be self-contained and portable. To ensure that tests are free of implementation details and environment dependency, we may simply run them in a different test environment, with different implementations of components. But the boileplate and manual work involved in rewiring components, writing hardcoded fixtures and setting up different test environments make this very hard to do at scale. To tackle this problem we've created distage & distage-testkit, distage-testkit gives you the following superpowers:
* ability to easily swap out individual components or entire test environments
* principled & leak-free control of global resources for integration testing – docker containers, DBs, DDLs
* execute effects or allocate resources per-test, e.g. generate random fixtures per-test
* first-class testing of functional effects
* write tests as lambdas – access test fixtures via parameters or ZIO Environment
...and more! We'll also discuss general testing practices and what really distinguishes good tests from great tests.
Ginsbourg.com - Performance and Load Test Report Template LTR 1.5Shay Ginsbourg
This document summarizes the results of load and performance tests conducted on a web application. The tests were run using JMeter on Linux servers with varying numbers of virtual users from 10 to 200. Key metrics like response times, throughput, memory and CPU usage were monitored during the tests to analyze system performance under increasing load. The results revealed that the application could handle up to 150 concurrent users before degradation in response times.
This thesis aims to identify function constructors, which emulate object-oriented classes, in JavaScript programs through static source code analysis. The author develops an approach called JSDeodorant that parses JavaScript code and builds a hierarchical model to detect function constructors declared locally, in namespaces, or modules. An evaluation on 29 programs shows JSDeodorant achieves high precision (97%) and recall (98%), outperforming an existing tool. An empirical study also analyzes the use of classes across different JavaScript domains.
A walkthrough of the benefits, drawbacks, new features, important "gotchas" and some code samples using the testing features available in Grails 1.1.
We'll be covering:
- unit testing (specifically GrailsUnitTestCase and it's extensions)
- integration testing
- functional testing (using WebTest)
This document outlines the agenda for a day 2 MTLM training. The topics covered include recapping test case planning, management, and execution. Creating basic CodedUI tests from test cases and manual recordings will be demonstrated. Customizing the UIMap and code for optimization is also on the agenda. Data driven tests and assertions will be discussed. Troubleshooting CodedUI, common practices and questions will be addressed. Configuring builds to execute CodedUI tests from Visual Studio and Microsoft Test Manager will be shown. Associating automation with test cases and executing from MTM is included. Additional topics are MTLM, Scrum methodologies, lab management, test analytics, and using MTLM with Azure projects.
Many Scala developers nowadays consider using Dependency Injection frameworks an anti-pattern incompatible with modern FP settings. We argue that it's just a consequence of a bad experience with legacy Java runtime reflection-based implementations that lack features important for modern functional programming, such as a first-class support for higher-kinded types. We argue that as a paradigm for structuring purely functional programs, DI with automatic wiring compares favorably against implicits, monad transformers, free monads, algebraic effects, cake pattern et al, enabling scaling and a degree of modularity unachievable by any manual wiring approach. This talk covers DIStage – a transparent, flexible and efficient DI framework for Scala that enables late binding, testability, effect separation and modular resource management at scale, working with, instead of compromising the Scala type system.
Documentation: https://izumi.7mind.io/latest/release/doc/distage/
We discuss the structure of WebXPRT 2015, including the Web technologies and libraries WebXPRT uses, and give detailed descriptions of its component tests and workloads. In addition, we provide a spreadsheet that shows how the benchmark converts raw timings into component test and overall scores. Finally, we show how to automate the benchmark and how to submit test results.
Software construction is an exercise in managing complexity, more so with the spiralling complexity required by modern games. Automated Testing is an industry proven methodology to deliver more reliable complex software, with a fighting chance to do it on time and on budget. And having fun doing so. Crytek is spearheading this idea in the game industry with its flagship title, and now sharing the experience with you: best practices, potential pitfalls, To-Do’s and No-No’s will be shown with real examples of unit testing game code using its proprietary testing framework and tools. Functional Testing and acceptance testing will also be touched on as a viable way of describing and checking game design requirements. And take automated testing to the next level.
Hyper-pragmatic Pure FP testing with distage-testkit7mind
Having a proper test suite can turn ongoing application maintenance and development into pure joy – the best tests check meaningful properties, not the implementation details, and hold no impliict assumptions about their test environment - every test case must be self-contained and portable. To ensure that tests are free of implementation details and environment dependency, we may simply run them in a different test environment, with different implementations of components. But the boileplate and manual work involved in rewiring components, writing hardcoded fixtures and setting up different test environments make this very hard to do at scale. To tackle this problem we've created distage & distage-testkit, distage-testkit gives you the following superpowers:
* ability to easily swap out individual components or entire test environments
* principled & leak-free control of global resources for integration testing – docker containers, DBs, DDLs
* execute effects or allocate resources per-test, e.g. generate random fixtures per-test
* first-class testing of functional effects
* write tests as lambdas – access test fixtures via parameters or ZIO Environment
...and more! We'll also discuss general testing practices and what really distinguishes good tests from great tests.
Ginsbourg.com - Performance and Load Test Report Template LTR 1.5Shay Ginsbourg
This document summarizes the results of load and performance tests conducted on a web application. The tests were run using JMeter on Linux servers with varying numbers of virtual users from 10 to 200. Key metrics like response times, throughput, memory and CPU usage were monitored during the tests to analyze system performance under increasing load. The results revealed that the application could handle up to 150 concurrent users before degradation in response times.
This thesis aims to identify function constructors, which emulate object-oriented classes, in JavaScript programs through static source code analysis. The author develops an approach called JSDeodorant that parses JavaScript code and builds a hierarchical model to detect function constructors declared locally, in namespaces, or modules. An evaluation on 29 programs shows JSDeodorant achieves high precision (97%) and recall (98%), outperforming an existing tool. An empirical study also analyzes the use of classes across different JavaScript domains.
A walkthrough of the benefits, drawbacks, new features, important "gotchas" and some code samples using the testing features available in Grails 1.1.
We'll be covering:
- unit testing (specifically GrailsUnitTestCase and it's extensions)
- integration testing
- functional testing (using WebTest)
This document outlines the agenda for a day 2 MTLM training. The topics covered include recapping test case planning, management, and execution. Creating basic CodedUI tests from test cases and manual recordings will be demonstrated. Customizing the UIMap and code for optimization is also on the agenda. Data driven tests and assertions will be discussed. Troubleshooting CodedUI, common practices and questions will be addressed. Configuring builds to execute CodedUI tests from Visual Studio and Microsoft Test Manager will be shown. Associating automation with test cases and executing from MTM is included. Additional topics are MTLM, Scrum methodologies, lab management, test analytics, and using MTLM with Azure projects.
Many Scala developers nowadays consider using Dependency Injection frameworks an anti-pattern incompatible with modern FP settings. We argue that it's just a consequence of a bad experience with legacy Java runtime reflection-based implementations that lack features important for modern functional programming, such as a first-class support for higher-kinded types. We argue that as a paradigm for structuring purely functional programs, DI with automatic wiring compares favorably against implicits, monad transformers, free monads, algebraic effects, cake pattern et al, enabling scaling and a degree of modularity unachievable by any manual wiring approach. This talk covers DIStage – a transparent, flexible and efficient DI framework for Scala that enables late binding, testability, effect separation and modular resource management at scale, working with, instead of compromising the Scala type system.
Documentation: https://izumi.7mind.io/latest/release/doc/distage/
If a tablet can’t integrate into your secure enterprise environment well enough to perform basic tasks for your workers, it has no business in your workplace. In our tests, the Fujitsu STYLISTIC Q702 tablet powered by an Intel Core i5 vPro processor and Windows 8 not only completed common scenarios generally more quickly than the third-generation Apple iPad, it did so without encountering any integration issues for the tasks we tested. The Intel Core i5 vPro processor-based Windows 8 tablet was able to open, edit, and save documents as we intended, and let us be full participants in a WebEx conference. In contrast, the Apple iPad was unable to get the job done with these simple, everyday tasks.
The document is a test report that evaluates the performance of two Chromebook devices - an Intel processor-powered Toshiba Chromebook and an ARM processor-based Samsung Chromebook 2. The report found that the Toshiba Chromebook had significantly less waiting time across common student tasks, delivered over 13% longer battery life while browsing, and provided up to 4 times as many frames per second for 3D graphics. The report concludes that with the Toshiba Chromebook, students can spend less time waiting and more time learning.
Dell PowerEdge R930 with Oracle: The benefits of upgrading to PCIe storage us...Principled Technologies
Strong server performance is essential to companies running Oracle Database. The new Dell PowerEdge R930 provided strong performance with 22 SAS HDDs, but this performance improved when we replaced all of the drives with SAS solid-state drives. It improved further when we used a mix of HDDs and SDDs along with SanDisk DAS Cache. We saw the greatest performance boost when we used eight PCIe SSDs with SanDisk DAS Cache. The upgraded configuration of the Dell PowerEdge R930 with PCIe SSDs and SanDisk DAS Cache delivered 11.1 times the database performance of the all-HDD configuration. This makes the new Dell PowerEdge R930 a powerful platform with scalable storage options that can potentially translate into significant service improvements for your business and your customers, which helps in maximizing ROI.
VMmark 2.5.2 virtualization performance of the Dell EqualLogic PS6210XS stora...Principled Technologies
Virtualization is a critical part of data center computing. For your virtualization solution to succeed, it is essential that you have a storage platform capable of delivering the performance and capacity needed for a virtualized environment in a cost effective way. The Dell EqualLogic PS6210XS array, paired with a cluster of Dell PowerEdge M620 servers, ran 12 VMmark tiles for a total of 96 running VMs, and achieved a score of 14.80@12. This performance, along with its value and ease of management, make the Dell EqualLogic PS6210XS array an excellent investment.
Workstation heat and power usage: Lenovo ThinkStation P500 vs. Dell Precision...Principled Technologies
A workstation that runs coolly and uses less power is a great asset to workers and the companies they work for. In our tests, both when idle and when under load, the Lenovo ThinkStation P500 generally ran at lower surface temperatures and used less power than the Dell Precision T5810 Workstation. These findings show that the Lenovo ThinkStation P500 could meet the needs of those who want to provide a reliable, comfortable work environment while using less power.
Performance and durability comparison: Dell Latitude 14 5000 Series vs. Lenov...Principled Technologies
The document summarizes a performance and durability test of the Dell Latitude 14 5000 Series laptop against the Lenovo ThinkPad T440s. Key findings include:
- The Dell laptop was able to withstand three drops from 29 inches without damage to the hard drive, while the Lenovo suffered minor hard drive damage after the second drop.
- The Dell laptop had significantly longer battery life than the Lenovo, lasting over 2.5 hours more on a single charge.
- Performance on benchmarks was comparable between the two laptops, with some tests showing small advantages for one model or the other.
CrXPRT reliably evaluates the performance and battery life of devices running the Google Chrome operating system (OS). The benchmark provides an intuitive user interface, a runtime allowing it to be completed within half of a typical work day, and easy-to-understand results.
We discuss the newest version of MobileXPRT, including its development process, new features, and details of the individual test workloads. We also explain exactly how the benchmark calculates results.
We discuss the guiding concepts behind BatteryXPRT 2014 for Android’s development, as well as the benchmark’s structure. We describe the component tests; explain the differences between the app’s Airplane, Wi-Fi, and Cellular modes; and detail the statistical processes used to calculate expected
This document outlines a performance test plan for Sakai 2.5.0. It describes the objectives, approach, test types, metrics, goals, tools, and data preparation. The objectives are to validate Sakai meets minimum performance standards and test any new or changed tools. Tests include capacity, consistent load, and single function stress tests. Metrics like response time, CPU utilization, and errors will be measured. Goals include average response time under 2.5s and max under 30s, CPU under 75%, and 500 concurrent users supported. Silk Performer will be used to run tests against a Sakai/Tomcat/Oracle environment. Over 92,000 students and 1,557 instructors of data will be preloaded
GCC Compiler as a Performance Testing tool for C programsDaniel Ilunga
This paper introduces you to GCC Compiler as a Performance testing tool. GCC Compiler has never been used before as a Performance Testing, but in this paper we’ll be discussing various points on how this can be implemented based on the objective of a Performance testing.
In performance testing, we don’t focus on the bugs in the code that we are testing, but its purpose being of removing the bottlenecks in the codes by taking under consideration parameters like CPU time, memory usage, Speed, Stability, scalability and so on.
To achieve this, we have used the GCC optimization options; different levels of optimization are also discussed as well as a comparative study between manual testing and automatic testing (by using Unix/Linux commands).
The document provides a master test specification for testing the Simple Railroad Command Protocol (SRCP). It outlines the test plan including using black box and white box testing techniques. The test plan defines the test levels, environment, tools, and schedule. Key test areas are identified as network communication, SRCP connection modes, and general valid/invalid requirements. Requirements for testing are specified, including general requirements related to SRCP servers, commands, replies, and the handshake process.
This document describes the extraction and transformation module of an FDA web content mining project. It discusses using Selenium with Java to extract content from the FDA website, including news listings, dates, and links to PDFs and ZIP files. XPath is used to locate elements on pages when IDs are not available. The execution flow starts a web driver session, navigates to the news page, collects listings and nested links into a hash map, then extracts linked content and details into files and a CSV.
This thesis proposes and evaluates several SDN-enabled traffic engineering solutions:
1. An OVX Testing Framework to test the OpenVirteX network hypervisor which virtualizes OpenFlow networks.
2. A Command Line Interface for the ONOS Segment Routing application to retrieve switch statistics and configure tunnels/policies.
3. An emulation of packet-optical networks using ONOS as a multi-layer SDN controller to optimize traffic flow across packet and optical domains.
4. A Maximum Weighted Alpha scheduling algorithm for input queued switches to provide throughput optimality under hybrid data traffic loads. Performance is evaluated in an SDN testbed.
The document provides an overview of the ISTQB Agile Tester certification. It begins by comparing traditional waterfall software development methodology to agile methodology. With waterfall, requirements are gathered upfront and the customer only sees the final product, while with agile development is iterative with working software delivered in short iterations. An example compares developing a word processing competitor under the two methodologies. The rest of the document outlines agile principles, practices for testing in agile, roles of testers, agile testing techniques and tools.
1. Automated testing with QuickTest addresses the drawbacks of manual testing by dramatically speeding up the testing process through reliable and repeatable automated tests.
2. The QuickTest testing process consists of 7 main phases: preparing to record, recording a session, enhancing the test, debugging the test, running the test, analyzing results, and reporting defects.
3. The main QuickTest window contains elements like toolbars, panes, and views to assist with the testing process.
This document provides a test specification for the Simple Rail Road Command Protocol (SRCP) version 0.8.3. It outlines a test plan with the following key elements:
1. The test environment includes an SRCP daemon to emulate the SRCP server and digital plant, a Linux server to run the daemon, and client programs on Linux or Windows to send commands.
2. The test plan has six phases - test planning, test case generation, test case execution, analysis of test results, bug analysis, and documentation.
3. Test cases will be generated for three main areas - hand shake, commands, and info - based on analyzing the SRCP specification and source code to evaluate requirements compliance and
MobileXPRT is a benchmark to evaluate the performance of Android devices. MobileXPRT tests
are based on real-world user scenarios and produce user-relevant metrics and results.
This document discusses various techniques for optimizing performance in C code, including compiler optimizations, intermediate representation (IR) level optimizations, loop optimizations, data structure optimizations, and other techniques. It provides examples of applying these optimizations to a specific C code, such as using compiler flags tailored for the target CPU architecture, constant folding, redundancy elimination, and algebraic simplifications. The optimizations are able to significantly improve the execution time of the code, reducing it from over 18 minutes to under 40 seconds.
BatteryXPRT for Android benchmark reliably evaluates the battery life of Android-based devices. The benchmark provides an intuitive user interface, a runtime allowing it to be completed within one work day, and easy-to-understand results.
This performance test plan outlines objectives to compare the responsiveness and resource utilization of an application between a current and new production environment. It defines dependencies, limitations, the testing process, and deliverables. Performance testing will be done using JMeter to simulate anticipated workload and compare metrics between environments. Results will be analyzed to identify any bottlenecks before moving to the new environment.
This performance test plan outlines objectives to compare the responsiveness and resource utilization of a current production system and a new proposed production system. It defines the scope, dependencies, and risks. Tools like JMeter and PerfMon will be used to execute load tests on the systems and analyze results. Performance testing activities include installing tools, implementing tests, executing tests at typical loads, monitoring results, and delivering a test plan, results, and metrics.
If a tablet can’t integrate into your secure enterprise environment well enough to perform basic tasks for your workers, it has no business in your workplace. In our tests, the Fujitsu STYLISTIC Q702 tablet powered by an Intel Core i5 vPro processor and Windows 8 not only completed common scenarios generally more quickly than the third-generation Apple iPad, it did so without encountering any integration issues for the tasks we tested. The Intel Core i5 vPro processor-based Windows 8 tablet was able to open, edit, and save documents as we intended, and let us be full participants in a WebEx conference. In contrast, the Apple iPad was unable to get the job done with these simple, everyday tasks.
The document is a test report that evaluates the performance of two Chromebook devices - an Intel processor-powered Toshiba Chromebook and an ARM processor-based Samsung Chromebook 2. The report found that the Toshiba Chromebook had significantly less waiting time across common student tasks, delivered over 13% longer battery life while browsing, and provided up to 4 times as many frames per second for 3D graphics. The report concludes that with the Toshiba Chromebook, students can spend less time waiting and more time learning.
Dell PowerEdge R930 with Oracle: The benefits of upgrading to PCIe storage us...Principled Technologies
Strong server performance is essential to companies running Oracle Database. The new Dell PowerEdge R930 provided strong performance with 22 SAS HDDs, but this performance improved when we replaced all of the drives with SAS solid-state drives. It improved further when we used a mix of HDDs and SDDs along with SanDisk DAS Cache. We saw the greatest performance boost when we used eight PCIe SSDs with SanDisk DAS Cache. The upgraded configuration of the Dell PowerEdge R930 with PCIe SSDs and SanDisk DAS Cache delivered 11.1 times the database performance of the all-HDD configuration. This makes the new Dell PowerEdge R930 a powerful platform with scalable storage options that can potentially translate into significant service improvements for your business and your customers, which helps in maximizing ROI.
VMmark 2.5.2 virtualization performance of the Dell EqualLogic PS6210XS stora...Principled Technologies
Virtualization is a critical part of data center computing. For your virtualization solution to succeed, it is essential that you have a storage platform capable of delivering the performance and capacity needed for a virtualized environment in a cost effective way. The Dell EqualLogic PS6210XS array, paired with a cluster of Dell PowerEdge M620 servers, ran 12 VMmark tiles for a total of 96 running VMs, and achieved a score of 14.80@12. This performance, along with its value and ease of management, make the Dell EqualLogic PS6210XS array an excellent investment.
Workstation heat and power usage: Lenovo ThinkStation P500 vs. Dell Precision...Principled Technologies
A workstation that runs coolly and uses less power is a great asset to workers and the companies they work for. In our tests, both when idle and when under load, the Lenovo ThinkStation P500 generally ran at lower surface temperatures and used less power than the Dell Precision T5810 Workstation. These findings show that the Lenovo ThinkStation P500 could meet the needs of those who want to provide a reliable, comfortable work environment while using less power.
Performance and durability comparison: Dell Latitude 14 5000 Series vs. Lenov...Principled Technologies
The document summarizes a performance and durability test of the Dell Latitude 14 5000 Series laptop against the Lenovo ThinkPad T440s. Key findings include:
- The Dell laptop was able to withstand three drops from 29 inches without damage to the hard drive, while the Lenovo suffered minor hard drive damage after the second drop.
- The Dell laptop had significantly longer battery life than the Lenovo, lasting over 2.5 hours more on a single charge.
- Performance on benchmarks was comparable between the two laptops, with some tests showing small advantages for one model or the other.
CrXPRT reliably evaluates the performance and battery life of devices running the Google Chrome operating system (OS). The benchmark provides an intuitive user interface, a runtime allowing it to be completed within half of a typical work day, and easy-to-understand results.
We discuss the newest version of MobileXPRT, including its development process, new features, and details of the individual test workloads. We also explain exactly how the benchmark calculates results.
We discuss the guiding concepts behind BatteryXPRT 2014 for Android’s development, as well as the benchmark’s structure. We describe the component tests; explain the differences between the app’s Airplane, Wi-Fi, and Cellular modes; and detail the statistical processes used to calculate expected
This document outlines a performance test plan for Sakai 2.5.0. It describes the objectives, approach, test types, metrics, goals, tools, and data preparation. The objectives are to validate Sakai meets minimum performance standards and test any new or changed tools. Tests include capacity, consistent load, and single function stress tests. Metrics like response time, CPU utilization, and errors will be measured. Goals include average response time under 2.5s and max under 30s, CPU under 75%, and 500 concurrent users supported. Silk Performer will be used to run tests against a Sakai/Tomcat/Oracle environment. Over 92,000 students and 1,557 instructors of data will be preloaded
GCC Compiler as a Performance Testing tool for C programsDaniel Ilunga
This paper introduces you to GCC Compiler as a Performance testing tool. GCC Compiler has never been used before as a Performance Testing, but in this paper we’ll be discussing various points on how this can be implemented based on the objective of a Performance testing.
In performance testing, we don’t focus on the bugs in the code that we are testing, but its purpose being of removing the bottlenecks in the codes by taking under consideration parameters like CPU time, memory usage, Speed, Stability, scalability and so on.
To achieve this, we have used the GCC optimization options; different levels of optimization are also discussed as well as a comparative study between manual testing and automatic testing (by using Unix/Linux commands).
The document provides a master test specification for testing the Simple Railroad Command Protocol (SRCP). It outlines the test plan including using black box and white box testing techniques. The test plan defines the test levels, environment, tools, and schedule. Key test areas are identified as network communication, SRCP connection modes, and general valid/invalid requirements. Requirements for testing are specified, including general requirements related to SRCP servers, commands, replies, and the handshake process.
This document describes the extraction and transformation module of an FDA web content mining project. It discusses using Selenium with Java to extract content from the FDA website, including news listings, dates, and links to PDFs and ZIP files. XPath is used to locate elements on pages when IDs are not available. The execution flow starts a web driver session, navigates to the news page, collects listings and nested links into a hash map, then extracts linked content and details into files and a CSV.
This thesis proposes and evaluates several SDN-enabled traffic engineering solutions:
1. An OVX Testing Framework to test the OpenVirteX network hypervisor which virtualizes OpenFlow networks.
2. A Command Line Interface for the ONOS Segment Routing application to retrieve switch statistics and configure tunnels/policies.
3. An emulation of packet-optical networks using ONOS as a multi-layer SDN controller to optimize traffic flow across packet and optical domains.
4. A Maximum Weighted Alpha scheduling algorithm for input queued switches to provide throughput optimality under hybrid data traffic loads. Performance is evaluated in an SDN testbed.
The document provides an overview of the ISTQB Agile Tester certification. It begins by comparing traditional waterfall software development methodology to agile methodology. With waterfall, requirements are gathered upfront and the customer only sees the final product, while with agile development is iterative with working software delivered in short iterations. An example compares developing a word processing competitor under the two methodologies. The rest of the document outlines agile principles, practices for testing in agile, roles of testers, agile testing techniques and tools.
1. Automated testing with QuickTest addresses the drawbacks of manual testing by dramatically speeding up the testing process through reliable and repeatable automated tests.
2. The QuickTest testing process consists of 7 main phases: preparing to record, recording a session, enhancing the test, debugging the test, running the test, analyzing results, and reporting defects.
3. The main QuickTest window contains elements like toolbars, panes, and views to assist with the testing process.
This document provides a test specification for the Simple Rail Road Command Protocol (SRCP) version 0.8.3. It outlines a test plan with the following key elements:
1. The test environment includes an SRCP daemon to emulate the SRCP server and digital plant, a Linux server to run the daemon, and client programs on Linux or Windows to send commands.
2. The test plan has six phases - test planning, test case generation, test case execution, analysis of test results, bug analysis, and documentation.
3. Test cases will be generated for three main areas - hand shake, commands, and info - based on analyzing the SRCP specification and source code to evaluate requirements compliance and
MobileXPRT is a benchmark to evaluate the performance of Android devices. MobileXPRT tests
are based on real-world user scenarios and produce user-relevant metrics and results.
This document discusses various techniques for optimizing performance in C code, including compiler optimizations, intermediate representation (IR) level optimizations, loop optimizations, data structure optimizations, and other techniques. It provides examples of applying these optimizations to a specific C code, such as using compiler flags tailored for the target CPU architecture, constant folding, redundancy elimination, and algebraic simplifications. The optimizations are able to significantly improve the execution time of the code, reducing it from over 18 minutes to under 40 seconds.
BatteryXPRT for Android benchmark reliably evaluates the battery life of Android-based devices. The benchmark provides an intuitive user interface, a runtime allowing it to be completed within one work day, and easy-to-understand results.
This performance test plan outlines objectives to compare the responsiveness and resource utilization of an application between a current and new production environment. It defines dependencies, limitations, the testing process, and deliverables. Performance testing will be done using JMeter to simulate anticipated workload and compare metrics between environments. Results will be analyzed to identify any bottlenecks before moving to the new environment.
This performance test plan outlines objectives to compare the responsiveness and resource utilization of a current production system and a new proposed production system. It defines the scope, dependencies, and risks. Tools like JMeter and PerfMon will be used to execute load tests on the systems and analyze results. Performance testing activities include installing tools, implementing tests, executing tests at typical loads, monitoring results, and delivering a test plan, results, and metrics.
This guide is an extract from the two and three day course provided by me. It spans the complete testing lifecycle and the tool usages. It will look at the infrastructure implications and testing practices in formal and in agile teams. But, the main focus stays on the usages of the Microsoft Visual Studio testing tools, the knowledge you need to get starting with it, the practices you must have to work with it in real live and how you can bend the tools, with extensibility and normal use to your team needs.
The course and this guide is work in progress. It is not a testing training (I expect you already have testing knowledge), if you need that test process information I refer to the TMap website from Sogeti where you can find tons of information. This training guide follows the TMap testing lifecycle.
BETA work in progress, I add every training new material and tune current material.
This project is aimed at developing an online movie ticket booking system website for customers.Online movie ticket booking system is a project developed for booking movie ticket online.This project saves lots of time and reduces the work of the customer.In online movie ticket booking system booking the movie ticket can be done from anywhere and at any time(24*7).some features provided to the users are new registration,login in,see movies by category,compare ticket price and timing,Customer can book ticket online without registration but if he/she registers then he/she will get different types of special offers,e-newsletters,movie updates and lots more The user can also cancel or update their order
- Jyoti Tyagi submitted a project titled "Modification in the behavior of Nova Filter Scheduler to enhance the performance in OpenStack cloud" as part of her MSc in Cloud Computing at the National College of Ireland.
- The project aims to assess the behavior of the nova scheduler in OpenStack with the goal of reducing latency when placing virtual machines. It analyzes the default filters of the nova scheduler to understand limitations and attempts to improve performance and resource allocation.
- An experiment was performed to modify essential filters in the scheduler to prove the concept. Various scheduling filters were analyzed to check their impact on performance by changing default values and metrics.
The document describes a project submission for a Masters student's research dissertation on modifying the nova scheduler in OpenStack to enhance performance. The student aims to reduce latency when launching virtual machines by analyzing how scheduling is affected by factors like CPU cores, memory allocation, and disk usage. The submission sheet provides details of the student's name, ID, program of study, module, supervisor, project title, word count, and certification that the work is original. It also includes submission instructions.
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ...Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10 minute drenching with no ill effects, and were easier to view in direct sunlight than Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - InfographicPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs....Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel...Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd...Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AME SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi...Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621CTN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe...Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe...Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back...Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data intensive applications and workloads while reducing data center power costs.
Improve performance and gain room to grow by easily migrating to a modern Ope...Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PBs—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
2. A Principled Technologies white paper 2Exploring CrXPRT 2015
TABLE OF CONTENTS
Introduction ...............................................................................................................................................................3
Development process ................................................................................................................................................3
CrXPRT 2015: The details...........................................................................................................................................3
About the performance test ..................................................................................................................................3
About the battery life test .....................................................................................................................................4
Test workloads.......................................................................................................................................................4
Scoring........................................................................................................................................................................5
Scoring details........................................................................................................................................................7
Excluding outliers...............................................................................................................................................7
Statistics we use in CrXPRT results calculations ................................................................................................7
Calculating the results confidence interval........................................................................................................8
Sample detailed calculation of the expected battery life......................................................................................9
The performance score........................................................................................................................................11
After running CrXPRT...............................................................................................................................................13
Comparing results to the database..................................................................................................................13
Submitting results............................................................................................................................................13
About the BenchmarkXPRT Family ..........................................................................................................................13
The community model.........................................................................................................................................14
Where can I get more information? ....................................................................................................................14
What is the BenchmarkXPRT Development Community?.......................................................................................14
Conclusion................................................................................................................................................................14
About Principled Technologies ................................................................................................................................15
3. A Principled Technologies white paper 3Exploring CrXPRT 2015
INTRODUCTION
This paper explains the concepts behind CrXPRT 2015. CrXPRT 2015 is a tool created to evaluate the
performance and battery life of Chrome OS based devices—Chromebooks. Like the other XPRT family benchmarks, it is
easy to use, runs relatable workloads, and delivers easy-to-understand results.
CrXPRT can complete a performance test in 20 minutes and a battery life test in approximately 3-1/2 hours.
Even allowing for additional time to set up the device and charge its battery, you can get a complete set of results easily
in a workday. Optional tests such as a battery rundown test that completely discharges the battery take longer to run.
We explain the development guidelines common to all BenchmarkXPRT tools in general, as well as the specific
goals and assumptions of CrXPRT 2015. Finally, we discuss the structure of the tests, how CrXPRT calculates results, how
to share results, and how to participate in the BenchmarkXPRT community.
DEVELOPMENT PROCESS
We use a unique design methodology. Instead of the closed, bottom-up approach used by many benchmarking
efforts, we are using an open, top-down approach that includes the Development Community. Our approach starts by
taking input from the community and examining the most common use cases. We then write a Request for Comment
(RFC) proposing use cases to incorporate into the application. Once we have the RFC written, we publish it to the
community.
The community’s input on the RFC guides the drafting of a design document. The design document drives the
implementation of the community preview, which we release to the community for input. We make changes based on
community input from the preview period and finalize the code to create a general release.
CRXPRT 2015: THE DETAILS
CrXPRT 2015 is a tool for evaluating the performance and battery life of Chromebooks. You have the option of
running in Performance Test Mode or Battery Life Mode.
As with BatteryXPRT, the CrXPRT battery life tests complete in under a workday, running for 3-1/2 hours at its
default settings. Even allowing for additional time to set up the device and charge its battery, you can run the battery life
test and the performance test and get a complete set of results easily in well under a workday. Optional tests, such as a
battery rundown test that completely discharges the battery, take longer to run.
That 3-1/2-hour battery life test length is less than the battery life of most batteries. The battery life estimation
assumes that the battery under test, for the bulk of its usable life, discharges power in an approximately linear fashion.
This is a reasonable assumption for most modern Li-ion batteries.1
CrXPRT tracks battery levels during the test and uses
those results to estimate battery life.
About the performance test
The performance test measures the speed of the Chromebook. Four of the seven performance scenarios
(workloads) in the CrXPRT performance test are adapted from the WebXPRT 2013 framework. These workloads mirror
1
www.technick.net/public/code/cp_dpage.php?aiocp_dp=guide_bpw2_c02_06
4. A Principled Technologies white paper 4Exploring CrXPRT 2015
the kinds of things you do on the Internet every day and include these HTML5- and JavaScript-based workloads: Photo
Effects, Face Detection (JavaScript), Stocks Portfolio Dashboard, and Offline Notes. CrXPRT adds DNA Sequence Analysis
using HTML5 Web Worker and JavaScript, 3D Shapes using WebGL technologies, and Photo Collage using Portable
Native Client (PNaCl). Table 1 provides further details about each workload.
CrXPRT runs the seven-workload performance suite seven times before calculating the overall score. CrXPRT also
reports the individual workload scores. The test takes about 20 minutes to run.
About the battery life test
The battery life test includes all seven workloads from the performance suite, plus video playback, audio
playback, and HTML5 gaming scenarios. It also incorporates realistic periods of wait time, for a total of 11 components.
The 30-minute workload includes approximately 18 minutes of active workload and the remainder, about 12 minutes, of
wait/idle time. The combined work and idle time is always 30 minutes. However, because different devices work at
different speeds, the exact amount of idle time will vary by device with faster devices having more idle time and slower
devices less idle time. Table 1 describes the battery life workloads in more detail. By default, the test runs seven
iterations of the 30-minute battery life and takes 3 hours and 30 minutes to complete. You may also set the test to run
for 4.5 hours, 5.5 hours, or until the battery is exhausted. These extended tests require more iterations of the workload
than the standard seven-iteration test, but are the same as the standard test in all other respects.
CrXPRT performs two checks during a battery test to reduce the chance of losing time due to a faulty run. First, it
checks to make sure the Chromebook is not connected to power. Because checking during the active part of the test
might influence the results, CrXPRT checks after each iteration to make sure no one has plugged in the Chromebook
during the test. If the Chromebook is connected to power, CrXPRT will report an error and terminate the test.
The second check sees if the Chromebook was manually suspended during the test, as it would be if someone
closed the lid. CrXPRT does this by checking the total length of time the iteration took to complete. An iteration should
always take 30 minutes. However, if the device is suspended, the duration of the run will be longer. If CrXPRT detects
that an iteration lasted more than 33 minutes, it assumes that the device was suspended, generates an error, and ends
the test.
The battery life test mode produces an estimated battery life expressed in hours, a separate performance score,
individual performance scores for the performance workloads, and a frames-per-second (FPS) rate for a built-in HTML5
gaming component.
Test workloads
Table 1 describes the CrXPRT performance and battery life workloads. All seven workloads from the
performance test are included in the battery life test, but the battery life test also includes video playback, audio
playback, HTML5 gaming, and wait time components.
Workload Category Description
Performance
test
Battery life
test
Photo
Effects
HTML5 Canvas,
Canvas 2D, and
JavaScript
Measures the time to apply three effects
(Sharpen, Emboss, and Glow) to two photos each,
a set of six photos total
Face
Detection JS
HTML5 Canvas, Canvas
2D, and JavaScript
Measures the time it takes to check for human
faces in a set of five photos (low resolution)
5. A Principled Technologies white paper 5Exploring CrXPRT 2015
Workload Category Description
Performance
test
Battery life
test
Offline
Notes
HTML5 Local Storage,
JavaScript, AES
encryption and asm.js
Measures the time it takes to encrypt, store, and
display notes from local storage
Stock
Portfolio
Dashboard
HTML5 Canvas, SVG,
and JavaScript
Measures the time it takes to calculate and display
different graphical views of a stock portfolio
DNA
Sequence
Analysis
HTML5 Web Worker
and JavaScript
Measures the time it takes to process eight DNA
sequences for ORFs and amino acids
3D Shapes
with WebGL
WebGL-based Measures the time it takes to generate equation-
based 3D shapes and display them with WebGL
Photo
Collage
using PNaCl
PNaCl-based Measures the time it takes to apply the Sharpen
effect to four photos and combine them into a
collage (high resolution)
Video Player Browser-based video
playback
Plays a 2-minute 1,080p H.264 video clip in a
browser from the local system
-
Music
Player
Browser-based audio
playback
Plays an audio clip for 3 minutes
-
HTML5-
based game
HTML5 Canvas,
Canvas 2D, and
JavaScript
An impact.js-based game runs for about 2 minutes
-
Wait time N/A Device displays a wait page for the rest of the 30-
minute cycle
-
Table 1: CrXPRT performance and battery-life test workloads.
SCORING
The CrXPRT performance and battery life test modes produce different scores. The performance test mode
produces performance scores only; the battery life test reports battery life and performance scores.
The primary performance test mode result is the overall score that the benchmark calculates. For that score,
higher results are better. CrXPRT also reports individual results for the seven performance workloads. These results
report the average time (in milliseconds) that it took to run the workload. Because those results are times, lower scores
indicate faster performance and are better.
The primary battery-test result is the estimated battery life. For that score, higher results indicate longer battery
life and are better. The battery life test also produces performance score based on the seven performance suite
workloads, and a frames-per-second rate from the HTML5 gaming scenario. For both of these scores, higher results are
better. A detailed results page provides additional information including test’s exact duration, starting and ending
battery levels for each iteration, and the percentage of battery level drop for each iteration.
CrXPRT includes a margin of error with each score, a measure of the uncertainty of the score. CrXPRT reports the
battery life score as a fractional estimate in hours, plus or minus a percentage value, for example 7.73 hours +- 4%. The
CrXPRT performance score also reports an estimate plus a margin of error, for example 98 +-2%. Taken together, the
score and margin of error express a confidence interval at a 95% confidence level, a common level for this type
6. A Principled Technologies white paper 6Exploring CrXPRT 2015
calculation.2
You can interpret the results as saying that, if you were to repeat the test on the same system using the
same test procedures, you expect that, 95% of the time, the results would fall within the confidence interval CrXPRT
reports. For example, the results would mostly fall within the range of 96 – 100 for the 96 +- 2% result or within about
19 minutes (4%) on either side of the 7.73 battery life result.
Figure 1: Detailed results for a battery life test.
NOTE: Some Chromebooks report extremely low battery use for the first couple of iterations of the CrXPRT battery life
test, as low as 0%. This can cause CrXPRT to report results with a very wide confidence interval, one greater than 15%. At
the time of this writing, we are looking at ways to detect and compensate for this. If you see results with a very wide
confidence interval, we recommend that you use the rundown test.
The CrXPRT overall performance score and battery life performance result are based on the same seven
workloads and may be similar, but the two results are not interchangeable. The individual workloads run in different
contexts and under different conditions in the two modes of testing. The battery life test runs the performance
workloads in groups of three, with wait times and other tests interspersed between groups, while the performance test
runs the workloads sequentially and without breaks for other tasks or wait times. The performance test may run with
the device plugged in and with settings for that power state vs. with a possibly different on-battery power during the
battery life performance tests. The CrXPRT website results table
(principledtechnologies.com/benchmarkxprt/crxprt/2015/results) indicates in the test mode column whether the result
is from a battery life or performance mode tests. When comparing systems based on the CrXPRT performance scores,
always compare scores from the same test mode.
2
en.wikipedia.org/wiki/Confidence_interval Also, the Principled Technologies white paper WebXPRT 2013 results calculation and
confidence interval covers the confidence interval in detail.
7. A Principled Technologies white paper 7Exploring CrXPRT 2015
Scoring details
CrXPRT runs seven iterations of each workload. After it removes outliers (see below for details), the average
duration for each workload along with variance is displayed at 95% confidence level.
Excluding outliers
The CrXPRT performance test runs seven iterations of the seven-workload suite and measures the time it takes
to complete the workloads during each iteration. If one of the timed results for a workload stands out as being atypically
long, CrXPRT treats it as an outlier and discards it. CrXPRT then calculates the workload results and the overall score
using the remaining six results. We will discuss how we identify these outliers later.
With the default settings for the battery life test, CrXPRT runs seven 30-minute iterations and records the
amount of battery life remaining before and after each iteration. It calculates the battery-drop percentage during each
iteration. It discards high and low outliers among those seven percentages, if any, and uses the results that remain to
estimate the device’s typical battery life. If you run the battery test with more iterations, CrXPRT uses the same
procedure to determine outliers. In that case, more than seven iterations could be used for the calculations.
Statistics we use in CrXPRT results calculations
The CrXPRT calculations are straightforward. There are only a few concepts we use that are beyond basic
arithmetic and the confidence intervals that we discussed already. We briefly define those concepts below. A full
explanation of these terms is outside the scope of this paper, but the footnotes provide links to pages with more
information.
Interquartile range (IQR). The range of values that are in neither the top quartile nor the bottom quartile.
Sometimes called the “middle 50.” Note: There are several methods of calculating the IQR. We will show the
method CrXPRT uses in the calculations below.
Arithmetic mean or mean. What people typically mean by “average,” the sum of the values divided by the
number of values. In this document, the mean is always an arithmetic mean.3
Outliers. A value that is distant from other values. CrXPRT considers any value further than 1.5 times the
interquartile range (IQR) below the first quartile or above the third quartile to be an outlier.4
Standard deviation. A measure of how dispersed, or spread out, the set of data values is. The lower the
standard deviation, the more tightly clustered the data values are.5
Confidence interval. CrXPRT reports its results as a confidence interval expressed as a score +- a margin of
error using a 95% confidence level. The confidence level is the measure of certainty of the result.
Two-tailed Student’s t value. The t value determines the bounds within which the hypothesis (in this case
the performance score or battery life estimate) is true.6
For a performance test workload score, the
confidence interval on each side of the score (the 3% in the confidence interval 165 +-3%) has the t value as
one of its multipliers.
3
en.wikipedia.org/wiki/Mean#Arithmetic_mean_.28AM.29
4
en.wikipedia.org/wiki/Outlier
5
en.wikipedia.org/wiki/Standard_deviation
6
en.wikipedia.org/wiki/Student%27s_t-distribution
8. A Principled Technologies white paper 8Exploring CrXPRT 2015
Degrees of freedom. The t value is determined by the number of degrees of freedom in the results. Degrees
of freedom for these results is the number of sample test results minus 1.
Calculating the results confidence interval
CrXPRT uses the same results calculations for the performance scores as does WebXPRT. We discuss confidence
intervals and confidence levels in some detail in a WebXPRT paper:
principledtechnologies.com/benchmarkxprt/whitepapers/2013/WebXPRT-2013_calculation.pdf.
To compute the confidence intervals at the 95% confidence level, CrXPRT uses the two-tailed Student’s t value.7
For these calculations to be valid, the number of degrees of freedom (the number of sample test results used in the
calculation minus 1) must exceed the t value for that number of degrees of freedom.
Figure 2 shows 10 values of the two-tailed Student’s t value at the 95% confidence level.8
It is only when you
reach the four degrees of freedom, indicated by the first dark vertical line, that the degrees of freedom exceed the
calculated t value. For example in a standard seven-iteration run of a test that could exclude both high and low outliers,
the number of iterations could be five, six, or seven. Because the number of degrees of freedom is always one less than
the number of iterations, that means the degrees of freedom would be, respectively, four, five, or six.
Figure 2: Student’s t value for 1 to 10 degrees of freedom.
In the chart above, the higher the t value, indicated by the blue line, the less reliable the estimate will be. As you
can see, the reliability is very flat above seven degrees of freedom, which is why seven iterations is the default for
CrXPRT. There is a small but acceptable loss of reliability between four and seven degrees of freedom. However, below
four degrees of freedom, the confidence goes down rapidly. At three degrees of freedom or lower, the CI becomes so
wide that the confidence in the result would be very low and there is the possibility of a much higher variability of the
score.
7
en.wikipedia.org/wiki/Student%27s_t-distribution
8
www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf
0
2
4
6
8
10
12
14
1 2 3 4 5 6 7 8 9 10
tvalue
Degrees of freedom
Two-tailed Student’s t value, 95% confidence
9. A Principled Technologies white paper 9Exploring CrXPRT 2015
Sample detailed calculation of the expected battery life
CrXPRT calculates its battery-life results the same way as does BatteryXPRT 2014 for Android. The test measures
and reports the starting and ending battery capacity levels for each iteration and subtracts them to calculate the drop in
battery capacity during each iteration. Based on those results, CrXPRT estimates the time it would take for the battery to
drop from 100% to 0%.
Table 2 shows the battery use during each iteration of a seven-iteration CrXPRT test on one test system. That
test system saw a battery level drop of 38.8 percent, from a 100% charge to a 61.2% charge over the seven iterations
and 3-1/2-hour time of this test. You can see the details for this run at
www.principledtechnologies.com/benchmarkxprt/crxprt/2015/details.php?resultid=764
Iteration Starting battery level (%) Ending battery level (%) Battery drop (%)
1 100 100 0
2 100 94 6
3 94 87.6 6.4
4 87.6 81.2 6.4
5 81.2 74.7 6.5
6 74.7 67.9 6.8
7 67.9 61.2 6.7
Table 2: Battery use during seven iterations of CrXPRT.
The calculation of the expected battery life proceeds as shown in Table 3.
Step Result
Step 1: Identify and exclude any outliers within the battery-drop (%) results
A: Sort results. Column 1 in Table 4 shows the sorted battery-drop results.
B: Calculate bottom quartile, Q1, of battery-drop (%) results. CrXPRT uses the median of the lower half of the
results for the first quartile (results 1-3). With seven results, Q1, the median of the lowest three results will
always be the second sorted result
6
C: Calculate top quartile, Q3, of battery-drop (%) results. CrXPRT uses the median of the upper half of the
results for the first quartile (results 1-3). With seven results, Q1, the median of the highest three results will
always be the sixth sorted result
6.7
D: Calculate interquartile range. (IQR=Q3-Q1). 0.7
E: Calculate the lower outlier cutoff using the formula Q1 - IQR*1.5. Results far outside the bounds of Q1 and
Q2 are discarded as outliers and excluded from further calculations. CrXPRT defines an outlier as a result that
is more than 1.5 times the interquartile range (IQR) below the first quartile (Q1) or above the third quartile
(Q3). With seven values and because the IQR is based on values 2 through 6, the only values that could
possibly be outliers from that IQR are the first and seventh sorted values.
4.95
F: Calculate higher outlier cutoff (Q3 + IQR*1.5) 7.75
G: Discard the lowest sorted result if it is below the lower cutoff. For this data set, we drop iteration 1
because it falls below the lower cutoff.
YES
H: Discard the highest sorted result if it is higher than the higher cutoff. No results in this data set fall above
the higher cutoff, so we do not discard any high results.
NO
J: Count the remaining, included runs. Table 4, column B shows included runs. 6
Table 3: Calculating the outliers.
10. A Principled Technologies white paper 10Exploring CrXPRT 2015
Step Result
Sorted battery drop (%) Battery drop (%) excluding outlier
0.0
6.0 6.0
6.4 6.4
6.4 6.4
6.5 6.5
6.7 6.7
6.8 6.8
Table 4: Sorted results before and after discarding outlier.
We exclude the outliers in further calculations. Table 5 continues the calculations leading to the final expected
battery life and confidence interval results.
Step 2: Calculate average battery drop for the included runs (from Column B in table 4) 6.466
Step 3: Calculate the confidence interval
A: Calculate the variance based on the entire population of included runs (calculated the same as the
Excel VAR.P function). 0.0655
B: Calculate the standard deviation of the included battery-drop values (the square root of the variance). 0.256
C: t value for 95% confidence range. The Excel equivalent is TINV(0.05,5) rounded to three decimal
places, with 5 being the number of remaining results. 2.571
D: Calculate the 95% Confidence interval delta for the iterations (t value * standard
deviation/sqrt(count)). 0.268
E: Calculate minutes for the overall margin of error (+- value) (100 * 30-minute cycle duration *
confidence interval delta/ (average battery drop squared). 19.279
Step 4: Calculate estimated battery life
A: Calculate the number of expected 30-minute iterations based on the average battery drop (100% /
average percent drop% for included values). 15.464
B: Calculate estimated battery life (rounded to minutes) (expected iterations multiplied by the 30-minute
length of the iterations).
464
C: Convert estimated battery life to hours (estimated battery life / 60). 7.733
D: Convert confidence interval delta to a rounded percentage (confidence interval delta / estimated
battery life). 4%
Step 5: Report result, including battery life in hours and the estimated+- margin of error percentage
Expected
battery life
= 7.73
hours +- 4%
Table 5: Calculating the battery life result.
The calculations are the same for tests with more iterations, though with more iterations, more than one outlier
might be removed from the top or bottom. Any values in the bottom or top 25% could be outliers.
With the rundown battery life test, CrXPRT calculates an estimated battery life for a 100% battery drain if the
system shuts down with more than 0% battery life remaining. This calculation is different from the one for the fixed-
time workloads. We calculate a result of hours of battery life by dividing the lapsed time (in minutes) by 100, multiplying
11. A Principled Technologies white paper 11Exploring CrXPRT 2015
that result by the percentage battery life used during the test, then converting the result to hours. A rundown test for
the same system in the previous example ran for 7.65 hours until the system shut down with 1.7% remaining battery
charge. CrXPRT estimates 7.78 hours battery life for that system. CrXPRT does not report a margin of error for the
rundown test because the result involves minimal estimation.
The performance score
CrXPRT derives the per-iteration performance scores from the timings of the individual workloads. You can
access those timings through CrXPRT’s Previous Results screen. To access the workload timings, follow these steps:
1. Navigate to the Previous Results page in CrXPRT.
2. Click the Details link beside the result you wish to analyze.
3. After the Result Details windows appears, press Alt+D.
4. A new window will appear showing the raw workload timings, along with other test information.
5. Copy the raw workload timings to a text file and save the file locally.
It is important to note that the detailed results timings may be less than the timings you would get by
subtracting the end time from the start time in the detailed results. That is because the use cases for the battery-test
contain some idle time and fixed-rate work, to better mimic real-world behavior. The time spent resting or working at a
fixed rate is not included in the performance calculations.
CrXPRT calculates its performance results exactly the same way as WebXPRT. We have covered that process in in
the white paper WebXPRT 2013 results calculation and confidence interval.9
We refer you to that paper for the details
about the calculations.
This paper has a companion spreadsheet,10
which shows all the steps for calculating a CrXPRT result for a CrXPRT
performance test. A performance test produces seven results for each of the test workloads during a 20-minute test run.
We use similar calculations for the performance result tied to the battery life test, but that result includes more data
points. Each half-hour iteration of the battery life test runs the set of performance workloads three times. A 3-1/2-hour,
seven-iteration test produces 21 results for each test workload, all of which are included in the results calculations.
Longer tests produce even more results. A second companion spreadsheet11
shows the calculations for a 3-1/2-hour
test.
We provide a brief summary here of the calculations for the overall performance score and the individual
workload results for it. These results in this example are for this set of results for a Chromebook 14 14-q070nr:
www.principledtechnologies.com/benchmarkxprt/crxprt/2015/details.php?resultid=823
CrXPRT calculates results for each of the seven performance test workloads based on the time that it took to
complete the timed portion of the workload during the seven test iterations. Column 1 in Table 6 shows one set of
results for the Photo Effects tests, sorted. Columns 3 and 4 show the results calculations. The first calculations identify
any outliers. Column 2 shows the results excluding the outlier identified for these results. For the performance tests, we
look only at upper outliers, including all results below the cutoff for those results.
9
principledtechnologies.com/benchmarkxprt/whitepapers/2013/WebXPRT-2013_calculation.pdf
10
http://www.principledtechnologies.com/benchmarkxprt/whitepapers/2015/CrXPRT-2015-overall-result-calculation.xlsx
11
http://www.principledtechnologies.com/benchmarkxprt/whitepapers/2015/CrXPRT-2015-result-calculation-for-battery-
life-test-performance-scores.xlsx
12. A Principled Technologies white paper 12Exploring CrXPRT 2015
Results for seven
iterations sorted
(ms)
Results
excluding any
outliers (ms)
Calculations
Calculation
results
409 409 First quartile 416
416 416 Third quartile 434
418 418 Interquartile range 18
421 421 Outlier upper cutoff (This omits the 545 result) 461
426 426
Sample variance, [calculations from here on exclude
outliers]. The Excel equivalent is the Var.S function.
74.267
434 434
Sample standard deviation (the square root of the
variance)
8.618
545 Count (The number of remaining results) 6
t value for 95% confidence range. The Excel equivalent is
TINV(0.05,5) rounded to three decimal places, with 5
being the number of remaining results.
2.571
Mean (average of the results) 420.667
Standard error (standard deviation divided by the square
root of the count.
3.518
The radius of confidence interval (equal to the t value *
standard deviation/square root of the count)
9.045
Expressed as a percentage (CI/Mean) 2%
Reported score: average result and confidence interval 421 +- 2%
Table 6: Calculations for an individual workload performance test result.
The reported result for this example would be the rounded mean and the margin of error. The reported result
for this set of results would be 421 +-2%. These results are times and margins of error, so lower scores and smaller
margins of error are better.
Like the other members of the BenchmarkXPRT family, CrXPRT uses a calibration system. The calibration system
for CrXPRT is the Acer C720-2848 Chromebook running Chrome v34, and you will find timings for the calibration system
in the js/statistics.js file. The variable _perfTaskCalibBase stores the calibration data. The spreadsheets accompanying
this white paper include those calibration values.
The overall score does not express a time, but is instead the result of a calculation based on the ratio of the
individual workload scores of the system under test to those of the calibration system. Through this comparison, it
converts the result to a relative measure of performance so that, like the other XPRT benchmarks higher results are
better.
The calculations for the performance result for the battery life test uses the same calculations but on a larger set
of 21 or more results for each workload. We provide a spreadsheet example for its calculations as well.12
12
http://www.principledtechnologies.com/benchmarkxprt/whitepapers/2015/CrXPRT-battery-life-calculations.xlsx
13. A Principled Technologies white paper 13Exploring CrXPRT 2015
AFTER RUNNING CRXPRT
Comparing results to the database
You can view results for CrXPRT 2015 at www.principledtechnologies.com/benchmarkxprt/crxprt/2015/results.
To find detailed information on any set of scores, click the link under the Source column.
Submitting results
CrXPRT 2015 allows you to submit results to Principled Technologies. Simply click the Submit Result button on
the result screen. Fill in a contact email, device name, and model number. The address and comment fields are optional.
Then, all you have to do is click Submit.
Figure 3: Submitting results after a run.
ABOUT THE BENCHMARKXPRT FAMILY
The BenchmarkXPRT tools are a set of apps that help you test how well devices do the kinds of things you do
every day. In addition to CrXPRT 2015, the BenchmarkXPRT suite currently comprises the following tools:
BatteryXPRT 2014 for Android, an app to measure the battery life of Android-based phones and tablets
MobileXPRT, an app to test the responsiveness of Android devices
TouchXPRT, an app to test the responsiveness of Windows 8 and Windows RT devices
WebXPRT, an online tool to test the Web browsing capabilities of any device with Internet access
HDXPRT, a program that uses commercial applications to test the capabilities and responsiveness of PCs
We designed the apps to test a wide range of devices on a level playing field. When you look at results from
XPRTs, you get unbiased, fair product comparison information.
14. A Principled Technologies white paper 14Exploring CrXPRT 2015
The community model
We built BenchmarkXPRT around a unique community model. Community membership is open to anyone, and
there are many different ways to participate.
Members of the BenchmarkXPRT Development Community are involved in every step of the process. They give
input on the design of upcoming versions, contribute source code, and help test the resulting implementation.
Community members have access to the source code and access to early releases in the form of community previews.
The community helps us avoid the ivory tower syndrome. Diversity of input during the design process makes the
tests more representative of real world activity. Giving community members access to the source code both improves
the implementation of the design and increases confidence in the code.
The community model differs from the open source model primarily by controlling derivative works. It is
important that the BenchmarkXPRT benchmarks return consistent results. If the testing community calls different
derivative works by the same name, the result would be that the test results would not be comparable. That would limit,
if not destroy, the tools’ effectiveness.
Where can I get more information?
Visit us at CrXPRT.com or follow us on Twitter and Facebook. We announce breaking news on the
BenchmarkXPRT blog (available to everyone) and the BenchmarkXPRT forums (available to members only). If you cannot
find the answer to your question, or if you need help with CrXPRT, send an email to our team at
BenchmarkXPRTsupport@principledtechnologies.com.
WHAT IS THE BENCHMARKXPRT DEVELOPMENT COMMUNITY?
The BenchmarkXPRT Development Community is a forum where registered members can contribute to the
process of creating and improving the BenchmarkXPRT family, including CrXPRT. If you are not currently a community
member, we encourage you to join! (Yes, that means you – our community is open to everyone, from software
developers to interested consumers.) Not only will you get early releases of future versions of CrXPRT, but you will also
be able to download the source code (available to members only) and influence the future of the app. Register now, or
for more information, see the BenchmarkXPRT FAQ.
CONCLUSION
We hope this paper has answered any questions you may have about CrXPRT. If you have any other questions,
or if you have suggestions on ways to improve CrXPRT, please post them on the community forum or e-mail us at
BenchmarkXPRTsupport@principledtechnologies.com. For more information, visit us at BenchmarkXPRT.com and
CrXPRT.com.
15. A Principled Technologies white paper 15Exploring CrXPRT 2015
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC, 27703
www.principledtechnologies.cm
We provide industry-leading technology assessment and fact-based marketing
services. We bring to every assignment extensive experience with and expertise
in all aspects of technology testing and analysis, from researching new
technologies, to developing new methodologies, to testing with existing and new
tools.
When the assessment is complete, we know how to present the results to a
broad range of target audiences. We provide our clients with the materials they
need, from market-focused data to use in their own collateral to custom sales
aids, such as test reports, performance assessments, and white papers. Every
document reflects the results of our trusted independent analysis.
We provide customized services that focus on our clients’ individual
requirements. Whether the technology involves hardware, software, Web sites,
or services, we offer the experience, expertise, and tools to help our clients
assess how it will fare against its competition, its performance, its market
readiness, and its quality and reliability.
Our founders, Mark L. Van Name and Bill Catchings, have worked together in
technology assessment for over 20 years. As journalists, they published over a
thousand articles on a wide array of technology subjects. They created and led
the Ziff-Davis Benchmark Operation, which developed such industry-standard
benchmarks as Ziff Davis Media’s Winstone and WebBench. They founded and
led eTesting Labs, and after the acquisition of that company by Lionbridge
Technologies were the head and CTO of VeriTest.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
Disclaimer of Warranties; Limitation of Liability:
PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING, HOWEVER, PRINCIPLED
TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND ANALYSIS, THEIR ACCURACY,
COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE. ALL PERSONS OR ENTITIES RELYING ON THE
RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL
HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR
RESULT.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH ITS
TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC.’S LIABILITY, INCLUDING FOR DIRECT
DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.’S TESTING. CUSTOMER’S SOLE AND EXCLUSIVE REMEDIES ARE
AS SET FORTH HEREIN.