SIGUCCS 2013 ACM presentation.
Energy overhead of the graphical user interface in server operating systems. Heather Brotherton, J. Eric Dietz, John McGrory, and Fredrick Mtenzi. 2013. In Proceedings of the 2013 ACM annual conference on Special interest group on university and college computing services (SIGUCCS '13). ACM, New York, NY, USA, 65-68. DOI=10.1145/2504776.2504781 http://doi.acm.org/10.1145/2504776.2504781
I'm currently experiencing a strong cognitive dissonance, and it won't let me go. You see, I visit various programmers' forums and see topics where people discuss noble ideas about how to write super-reliable classes; somebody says he builds his project with the switches -Wall -Wextra -pedantic -Weffc++, and so on. But, God, where are all these scientific and technological achievements? Why do I keep coming across the silliest mistakes again and again? Perhaps something is wrong with me?
Get Lower Latency and Higher Throughput for Java Applications (ScyllaDB)
Getting the best performance out of your Java applications can often be a challenge due to the managed environment nature of the Java Virtual Machine and the non-deterministic behaviour that this introduces. Automatic garbage collection (GC) can seriously affect the ability to hit SLAs for the 99th percentile and above.
This session will start by looking at what we mean by speed and how the JVM, whilst extremely powerful, means we don’t always get the performance characteristics we want. We’ll then move on to discuss some critical features and tools that address these issues, e.g. garbage collection and JIT compilers. At the end of the session, attendees will have a clear understanding of the challenges and solutions for low-latency Java.
How Netflix Tunes EC2 Instances for Performance (Brendan Gregg)
CMP325 talk for AWS re:Invent 2017, by Brendan Gregg. "
At Netflix we make the best use of AWS EC2 instance types and features to create a high performance cloud, achieving near bare metal speed for our workloads. This session will summarize the configuration, tuning, and activities for delivering the fastest possible EC2 instances, and will help other EC2 users improve performance, reduce latency outliers, and make better use of EC2 features. We'll show how we choose EC2 instance types, how we choose between EC2 Xen modes: HVM, PV, and PVHVM, and the importance of EC2 features such as SR-IOV for bare-metal performance. SR-IOV is used by EC2 enhanced networking, and recently for the new i3 instance type for enhanced disk performance as well. We'll also cover kernel tuning and observability tools, from basic to advanced. Advanced performance analysis includes the use of Java and Node.js flame graphs, and the new EC2 Performance Monitoring Counter (PMC) feature released this year."
Curvature's Testing Environment (CuTE) for Servers (Curvature)
Curvature’s Testing Environment, otherwise known as CuTE, is an operating system that allows Curvature engineers to efficiently and reliably test servers and storage equipment.
Hybrid CPU GPU MATLAB Image Processing Benchmarking (Dimitris Vayenas)
An attempt to quantify the substantial performance improvement observed on Windows 8.1\ Nvidia GTX 780M\Intel HD 4600 via the latest NVIDIA Driver (326.01) that may help other users - particularly of the MATLAB Image Processing and Parallel Computing Toolboxes - to consider upgrading...
VMworld 2013: A Technical Deep Dive on VMware Horizon View 5.2 Performance an... (VMworld)
VMworld 2013
Banit Agrawal, VMware
Warren Ponder, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Can the performance of a computer system be increased through overclocking such that the percentage gain of work performed is greater than the percentage increase of electricity consumed?
This report was prepared based on my own experience and for self-learning; it is not intended to guide your decisions. This benchmark report has been created to give you a rough idea about the basic Alibaba components. Using this benchmark report for comparison with other cloud competitors is at the user's own risk.
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems and power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high performance system installations. Due to the link between energy consumption, power consumption and execution time of an application executed by the final user, it is important for these tools and the methodology used to consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high level objectives.
This webinar focused on tools designed to improve the energy-efficiency of HPC applications using a methodology of dynamic tuning of HPC applications, developed under the H2020 READEX project. The READEX methodology has been designed for exploiting the dynamic behaviour of software. At design time, different runtime situations (RTS) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows code developers to annotate regions of particular interest."
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newslett
Comparing CPU and memory performance: Red Hat Enterprise Linux 6 vs. Microsof... (Principled Technologies)
Understanding how your system resources are utilized and how well they perform can be extremely valuable as you plan your infrastructure, making the selection of the operating system a pivotal decision that could influence your IT strategy for many years to come. Throughout our CPU and RAM tests, we found that the open-source Red Hat Enterprise Linux 6 solution performed as well or better than Microsoft Windows Server 2012. In our SPEC CPU2006 tests, the Red Hat Enterprise Linux 6 solution achieved consistently higher scores than the Windows Server 2012 solution. When we used the LINPACK benchmark to test floating point performance of CPUs, we also found that tuning the operating system allowed us to get even greater performance out of the Red Hat Enterprise Linux 6 system. In our memory bandwidth tests, the Red Hat Enterprise Linux 6 solution outperformed the Windows Server 2012 solution at mid-range thread counts.
By choosing an operating system that can deliver strong performance on all subsystems out of the box and increase performance even more when tuned, you can ensure that you are giving your applications the necessary resources to perform well and providing your organization with a solid foundation for future growth.
The Microarchitecture of FPGA-Based Soft Processors (Deepak Tomar)
This presentation is on the paper "The Microarchitecture of FPGA-Based Soft Processors" by Peter Yiannacouras, Jonathan Rose, and J. Gregory Steffan, Dept. of Electrical and Computer Engineering, University of Toronto.
Large-Scale Optimization Strategies for Typical HPC Workloads (inside-BigData.com)
In this deck from PASC 2019, Liu Yu from Inspur presents: Large-Scale Optimization Strategies for Typical HPC Workloads.
"Ensuring performance of applications running on large-scale clusters is one of the primary focuses in HPC research. In this talk, we will show our strategies for performance analysis and optimization of applications in different fields of research using large-scale HPC clusters. Our strategies are designed to comprehensively analyze the runtime features of applications, the parallel mode of the physical model, the algorithm implementation, and other technical details. This three-level strategy covers platform optimization, technological innovation, and model innovation, with targeted optimization based on these features. State-of-the-art CPU instructions, network communication and other modules, and innovative parallel modes of some applications have been optimized. After optimization, it is expected that these applications will outperform their non-optimized counterparts with a clear increase in performance."
Watch the video: https://wp.me/p3RLHQ-kwB
Learn more: http://en.inspur.com/en/2403285/2403287/2403295/index.html
and
https://pasc19.pasc-conference.org/program/keynote-presentations/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Windows server power_efficiency___robben_and_worthington__final (Bruce Worthington)
Computer Measurement Group Journal, Spring 2009.
Windows Server power efficiency has improved from release to release over the past decade. This paper presents the methodology and data used to validate the existing Windows Server power management algorithms, covers server-class processor and component power measurements, and discusses some of Windows’ power measurement tools and future power optimizations.
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads (Samsung Business USA)
The transition from Serial ATA (SATA ) to Peripheral Component Interconnect Express (PCIe) interface and Non-Volatile Memory Express (NVMe) protocol is taking client storage to a new level. This white paper discusses the benefits that PCIe NVMe SSDs, such as Samsung’s 950 PRO, bring to client PC users. Client PC workloads are not always well understood in the industry, since common benchmarking utilities tend to focus on measuring maximum performance rather than performance under typical PC usage. This white paper looks at actual IO traces of PC workloads to better understand how client SSDs should be benchmarked, and also tests the 950 PRO against other Samsung SSDs to show how PCIe and NVMe improve IO performance in tests that represent real-world IO activity.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, what agile testing is, and finally what Testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
2. Introduction
This study will:
Make a case for reducing use of the graphical user interface
Avoid focusing on a particular brand of operating system
6. Example
If a PCI card such as a video card were removed, saving 41 watts from each of 500 servers in a data center, the cumulative power saved (after applying the 2.84 cascade-effect multiplier used in slide 19) would be 58,220 watts. At an average of ten cents per kilowatt-hour, this results in a savings of $51,035.65 per year.
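The slide's arithmetic can be checked with a short Python sketch. All figures come from the deck; the 2.84 cascade-effect multiplier appears on slide 19, and the $51,035.65 figure works out if a year is taken as 8,766 hours (slide 19 itself uses 8,765.81):

```python
# Savings from removing a 41 W video card in 500 servers,
# with the deck's 2.84 cascade-effect multiplier (slide 19).
watts_per_card = 41      # watts saved per server
servers = 500
cascade = 2.84           # cooling / power-delivery overhead factor
hours_per_year = 8766    # 365.25 days * 24 h
price_per_kwh = 0.10     # dollars

total_watts = watts_per_card * servers * cascade
annual_cost = total_watts * hours_per_year / 1000 * price_per_kwh
print(f"{total_watts:.0f} W saved, ${annual_cost:,.2f}/year")
# 58220 W saved, $51,035.65/year
```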
8. Monitoring Tool
Watts up? Pro, universal outlet version. This meter is capable of measuring 100 to 250 V within plus or minus 1.5 percent accuracy. The meter is also capable of logging at one-second intervals and provides a USB interface and PC software.
9. Linux Observations
Linux-based server operating systems ran the top command during the observations (batch mode is needed when redirecting output to a file):
top -b -d 1 > /home/testOSName.txt
10. Windows Observations
Windows ran the typeperf command-line tool during the observations, configured to provide much of the same information as top:
typeperf "\Memory\Available Bytes" "\Processor(*)\% Processor Time" "\Process(*)\Thread Count" > testOSName.csv
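Once typeperf has written the CSV, the logged counters can be averaged offline. A minimal post-processing sketch, assuming a counter column header of the usual typeperf form (the file name and column name here are illustrative, not from the deck):

```python
import csv

def column_mean(path, column):
    """Mean of one numeric counter column in a typeperf CSV log."""
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cell = (row[column] or "").strip()
            if cell:  # typeperf leaves blank cells when a sample fails
                values.append(float(cell))
    return sum(values) / len(values)

# e.g. column_mean("testOSName.csv", r"\\HOST\Process(_Total)\Thread Count")
```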
11. Hardware
Intel Atom D525 1.8GHz dual-core processor
Integrated Intel Graphics Media Accelerator 3150
Gigabit LAN
SD card reader
5 USB connections
Fan-less external power supply
Intel Solid State Drive 80GB 320 Series
PNY 4GB PC3-10666 1.3GHz DDR3 SODIMM
12. Server
Baseline power consumption: mean 7.96 watts, median 8.70 watts.
After adding 4GB of RAM to the server, consumption over a one-hour period was a mean of 15.36 watts and a median of 15 watts.
After Solid State Drive (SSD) installation, consumption was a mean of 17.42 watts and a median of 17.7 watts.
Final baseline for the server: 17.42 to 17.7 watts.
13. Server Operating Systems
The software used for testing comprised the following x86 operating systems:
Ubuntu 11.10 (Linux)
Windows Server 2008 R2 Datacenter GUI
Windows Server 2008 R2 Datacenter Core
16. Table explained
The mean number of threads:
GUI: 365
Non-GUI: 256
Difference: approximately 109 threads
This indicates that eliminating the roughly 100-thread GUI overhead can save about one watt at the server level.
17. Findings
Operating systems tested without a graphical user interface (GUI) used roughly 17.5 to 17.6 watts.
GUI-based operating systems tested consumed roughly 18.1 to 18.9 watts.
Not using a GUI would save 0.6 to 1.3 watts per server.
18. Conclusion
Savings of roughly 1 watt per server
Doesn’t seem like a big deal?
Maybe, but now you don’t need that video card…
19. Math
(1 watt GUI + 41 watt video card) × 2.84 cascade effect = 119.28 watts
Hours in a year: 8,765.81
wattage × hours used ÷ 1000 × price per kWh = cost of electricity
(119.28 × 8765.81 ÷ 1000) × 0.10 = $104.56 per server per year
For 500 servers: $52,279.29
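The cost formula on this slide is easy to wrap as a small helper; here is a sketch using the deck's own numbers (the function name is mine, not the deck's):

```python
def annual_cost_usd(watts, hours_per_year=8765.81, price_per_kwh=0.10):
    """Cost of a constant electrical load: watts x hours / 1000 x $/kWh."""
    return watts * hours_per_year / 1000 * price_per_kwh

# 1 W GUI overhead + 41 W video card, times the 2.84 cascade effect
per_server_watts = (1 + 41) * 2.84
print(round(per_server_watts, 2),
      round(annual_cost_usd(per_server_watts), 2),
      round(500 * annual_cost_usd(per_server_watts), 2))
# 119.28 104.56 52279.29
```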