Software functional testing can unveil a wide range of potential malfunctions in applications. However, a significant fraction of errors is hard to detect through a traditional testing process. Problems such as memory corruption, memory leaks, performance bottlenecks, low-level system call failures and I/O errors may show no symptoms on a tester’s machine while causing disasters in production. On the other hand, many handy tools have been emerging on all popular platforms that allow a tester or an analyst to monitor the behavior of an application with respect to these dark areas and identify potentially fatal problems that would otherwise go unnoticed. Unfortunately, these tools are not yet in widespread use, for a few reasons. First, using them requires a certain amount of expertise in system internals. Furthermore, these monitoring tools generate a vast amount of data even with careful filtering, and therefore demand a significant amount of analysis time even from experts. As a result, using monitoring tools to improve software quality becomes a costly operation. Another facet of this problem is the lack of infrastructure to automate recurring analysis patterns.
This paper describes the current state of ongoing research into developing a framework that automates a significant part of the process of monitoring various quality aspects of a software application using such tools and deriving conclusions from the results. To the best of our knowledge, this is the first framework to do so. It provides infrastructure for analysts to extract relevant data from monitoring tool logs, process those data, make inferences and present analysis results to a wide range of stakeholders in a project.
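To make the extract-process-infer pipeline concrete, here is a minimal sketch in Python. It is not the framework's actual implementation; the log format (strace-style system call traces), the failure pattern, and the reporting threshold are all illustrative assumptions.

```python
import re
from collections import Counter

# Hypothetical excerpt of a strace-style monitoring log; in practice this
# would be read from a real monitoring tool's output file.
SAMPLE_LOG = """\
open("/etc/app.conf", O_RDONLY) = 3
read(3, "...", 4096) = 512
open("/var/run/app.sock", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/var/run/app.sock", O_RDONLY) = -1 ENOENT (No such file or directory)
write(4, "...", 128) = -1 EPIPE (Broken pipe)
"""

# Match lines where a system call returned -1, capturing the call name
# and the errno symbol (extraction step).
FAILURE = re.compile(r'^(?P<call>\w+)\(.*\)\s*=\s*-1\s+(?P<errno>E\w+)')

def extract_failures(log_text):
    """Return a Counter of (syscall, errno) pairs found in the log."""
    counts = Counter()
    for line in log_text.splitlines():
        m = FAILURE.match(line)
        if m:
            counts[(m.group('call'), m.group('errno'))] += 1
    return counts

def infer_findings(counts, threshold=2):
    """Flag any failure mode repeating at least `threshold` times
    (a deliberately simple inference rule for illustration)."""
    return [f"{call} failing with {err} ({n} occurrences)"
            for (call, err), n in counts.items() if n >= threshold]

if __name__ == "__main__":
    for finding in infer_findings(extract_failures(SAMPLE_LOG)):
        print(finding)
```

Even this toy version reflects the motivation above: the raw log is too noisy to read line by line, but an automated extract-and-threshold rule reduces it to a short list of findings that non-experts can act on.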
#lspe Building a Monitoring Framework using DTrace and MongoDB, by dan-p-kimmel
A talk I gave at the Large Scale Production Engineering meetup at Yahoo! about building monitoring tools and how to use DTrace to get more out of your monitoring data.
By Maaike Kempes, Dutch WASH Alliance. Prepared for the Monitoring sustainable WASH service delivery symposium, Addis Ababa, Ethiopia, 9-11 April 2013.
Monitoring and evaluation are vital to determining the effectiveness of an organization's assistance, establishing clear links between past, present and future initiatives and their results. The process helps improve programme performance and achieve desired results. It provides opportunities for fine-tuning, re-orientation and effective planning of the programme, without which it is impossible to measure the programme's success and impact even if the approach is right.
1. Intervention Process Checklist
Pre-referral Documentation (must be completed and submitted prior to initial SAT meeting):
_____ Tier I Implemented for at least 4 – 6 weeks
_____ Tier II Implemented for at least 4 – 6 weeks
_____ Documentation of effectiveness of the interventions
_____________________________________________________________________________
Initial Tier III Meeting (SAT)
Date: _______________
_____ Vision/Hearing Screening completed prior to meeting
Passed Vision: Yes No Passed Hearing: Yes No
_____ Parent Input Form
_____ Tier III Implemented
_____ Other documentation forms as necessary.
Action taken: ___ Continue Interventions ___ Modify Interventions
Follow-up SAT Meeting
Date: _______________
_____ Intervention Updates
_____ Other documentation forms as necessary.
Action taken: ___ Continue Interventions ___ Modify Interventions
___ Refer to EC ___ Exit Tier III
Follow-up SAT Meeting 2
Date: _______________
_____ Intervention Updates
_____ Other documentation forms as necessary.
Action taken: ___ Continue Interventions ___ Modify Interventions
___ Refer to SPED ___ Exit Tier III
EC Referral
_____ SAT Chairperson submits paperwork to EC – date: ________________