State of application performance management in the Indian BFSI sector


Almost every participant in the BFSI sector identifies application
uptime as a critical metric of application performance and recognises
the need for those applications to function optimally, i.e. to increase
productivity while reducing costs. But this study showed that
organisations did not have defined standards of measurement and
did not consider industry benchmarks to be relevant indicators.

  1. The state of application performance management in the Indian BFSI sector
     A fact-finding study by Anunta
  2. Index
     • Scope and methodology
     • The APM Assessment
     • The end-user conundrum
     • What SLAs fail to measure
     • Measuring the business impact
     • Top 5 survey insights
  3. Scope and methodology
     What: Anunta Tech commissioned ValueNotes Research to study the state of
     application performance management, monitoring and measurement in the
     Indian BFSI sector. The objective was to obtain qualitative insights into
     how BFSI companies currently measure the productivity and efficiency of
     their applications. This included understanding the perceptions of IT
     decision makers on key parameters such as:
     • Are they measuring the productivity and efficiency of their application
       performance and delivery?
     • How are they measuring application performance?
     • What are the key SLAs around application performance?
     • What kinds of penalties and guarantees exist in current SLAs with
       vendors?
     • Are they measuring the business impact of application performance?
     Who: The study polled senior-level IT professionals at 34 BFSI
     organisations, each with more than 500 endpoints (desktops, laptops and
     other mobile devices). 61% of the respondents were IT heads, CTOs or CIOs,
     while the balance consisted of VPs and GMs of IT. 47% of the respondents
     were from banks and 26% from insurance companies; AMCs and brokerages
     accounted for the rest.
     How: ValueNotes conducted the survey over a period of three weeks through
     a combination of face-to-face and telephonic interviews.
  4. The APM Assessment: APM challenges perceived by the BFSI sector
     Key findings:
     • Lack of integration and a scarcity of skilled resources, compounded by
       attrition, are among the top factors impeding application delivery.
     • Efficient management of the performance of critical applications, and
       seamless delivery across the entire delivery chain, was found to be a
       top priority.
     • These challenges are redefining the way application performance
       management (APM) is carried out. The increasing complexity of
       applications and application delivery architectures, widening
       geographical reach and constant upgrades forced by technological
       obsolescence are creating problems in monitoring application
       performance.
     Critical inferences:
     • Application uptime is the most crucial metric of application
       performance.
     • That said, many organisations do not have any set standards or
       benchmarks, and the metrics that were defined were neither monitored
       regularly nor analysed further to understand the effectiveness of
       application performance.
     • Older banks and PSUs have a higher tolerance for downtime. Additionally,
       metrics tend to be flexible for remote areas where connectivity and
       bandwidth issues can cause higher levels of downtime.
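Since application uptime is the metric the sector leans on most, it helps to be concrete about what an uptime figure implies. The sketch below is not part of the study; the function names and all figures are hypothetical illustrations of how availability is conventionally computed, and of the downtime budget a given target allows over a 30-day month.

```python
# Illustrative sketch (not from the study): uptime as a percentage of
# elapsed time, and the monthly downtime a given target permits.
# All numbers below are hypothetical examples.

def uptime_pct(uptime_minutes: float, downtime_minutes: float) -> float:
    """Availability as a percentage of total elapsed time."""
    total = uptime_minutes + downtime_minutes
    return 100.0 * uptime_minutes / total

def allowed_downtime_minutes(target_pct: float,
                             period_minutes: float = 30 * 24 * 60) -> float:
    """Downtime budget implied by an uptime target (default period: 30 days)."""
    return period_minutes * (1.0 - target_pct / 100.0)

month = 30 * 24 * 60  # 43,200 minutes

# A branch application down for 90 minutes in a 30-day month:
print(round(uptime_pct(month - 90, 90), 3))      # availability achieved
print(round(allowed_downtime_minutes(99.9), 1))  # budget at a 99.9% target
```

The point of the second function is the one the survey hints at: a target like "99.9% uptime" is only meaningful once it is translated into a concrete downtime allowance that can be tracked.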
  5. The APM Assessment: Their take
     “You cannot blindly compare the industry benchmark and draw conclusions.”
     - Head-IT, Old Private Bank
     “We do not benchmark against industry standards, because we believe our
     own standard meets the needs.” - Head-IT, Old Private Bank
     “I do not believe in comparing ourselves with an industry benchmark.”
     - CIO, New Private Bank
     Anunta’s take: Almost every participant in the BFSI sector identifies
     application uptime as a critical metric of application performance and
     recognises the need for those applications to function optimally, i.e. to
     increase productivity while reducing costs. But this study showed that
     organisations did not have defined standards of measurement and did not
     consider industry benchmarks to be relevant indicators.
  6. The End-User Conundrum: End-user monitoring challenges
     Key findings:
     • CTOs understand the importance of end-user monitoring, with 100% of CTOs
       rating it as critically important.
     • While a majority claimed to measure performance from an end-user
       perspective, only 47% claimed consensus between IT and end-user
       experience, raising doubt as to whether measurement was in fact
       happening at the end-user level.
     • End-user feedback, incident reporting and problem solving are the
       metrics employed to capture the end-user experience, indicating a
       reactive approach to monitoring; only 15% of those polled said they took
       a proactive approach to end-user monitoring.
     Critical inferences:
     • Application performance measurement from the end-user perspective is
       reactive and not linked to business metrics.
     • Given the lack of specificity around measurement metrics, most results
       are vague and based primarily on end-user feedback.
  7. The End-User Conundrum: Their take
     “We measure end-user performance by feedback, questionnaires, branch
     visits - but these are on a random basis.” - CIO, Public Sector Bank
     “We do not have any metrics for measuring end-user experience; we consider
     the general feedback given by the end user.” - GM-IT, Co-operative Bank
     “End-user monitoring is very important. But practically it is not feasible
     all the time.” - Head-IT, AMC
     “We don’t know how to do this or how to quantify this.” - Head-IT, Public
     Sector Bank
     “We have vendors like Karvy who monitor the application performance. We do
     not monitor the performance regularly. We look into the matter only in
     case of issues or problems.” - IT-Head, AMC
     “We do look at the problem tickets logged, but to have a structured system
     for doing this is beyond our scope today.” - IT-Head, Insurance
     Anunta’s take: Application performance monitoring at the end-user level
     seems to be missing in the Indian BFSI industry. Only 47% say that there
     is consensus between IT and the end user, which indicates that application
     performance is perhaps measured at the device level rather than the
     end-user level.
  8. The End-User Conundrum: How different segments monitor
     Industry: PSUs and co-operative banks
     How they monitor:
     • Do not measure performance in a structured manner
     • Rely on audits and branch surveys
     Anunta’s take: Understandably, technology adoption has always been slow in
     this segment, and many of these banks need to first upgrade their existing
     application infrastructure before they can begin to measure performance in
     a more technical manner.
     Industry: Private banks
     How they monitor:
     • Deploy tools to measure performance
     • Some rely on end-user feedback for performance checks
     Anunta’s take: While they may be deploying tools, this is most likely
     happening at various levels of the enterprise network and not necessarily
     from an end-user standpoint. So while end-user feedback is important,
     there is often a disconnect when network diagnostics tell the IT team that
     its various components, including endpoint, LAN, server and application
     software, are all functioning perfectly.
  9. The End-User Conundrum: How different segments monitor (continued)
     Industry: AMCs and institutional brokers
     How they monitor:
     • Monthly dashboards where application performance is displayed
     • A majority use vendors for monitoring
     Industry: Insurance
     How they monitor:
     • Deploy tools for measurement
     • Capture end-user characteristics and validate them against the IT
       measurements
     Anunta’s take: Insurance companies, AMCs and institutional brokers seem to
     have identified the right metrics; however, the study shows that this
     measurement is not being done consistently, nor are vendors or in-house
     teams being held to SLAs. Most importantly, there is a need to move from
     troubleshooting to proactive APM.
  10. What SLAs fail to measure: SLA measurement challenges
      Key findings:
      • The SLAs around application performance are not measured and monitored
        regularly, for reasons such as a lack of documented data and ambiguity
        around metrics.
      • Most organisations have an in-house team that manages application
        performance, but very few have SLAs around that in-house team.
      • SLA measurement is done on a selective basis, and only when a disaster
        strikes. Proactive monitoring at the end-user level is still not
        adopted by many for application delivery management.
      Critical inferences:
      • Failure to measure: Metrics pertaining to application or server
        availability and network capacity are useful to the IT department, but
        they may not be a true measure of IT efficiency in terms of the revenue
        and productivity generated. IT teams in BFSI need to reflect on which
        metrics can provide a link to business productivity.
      • Failure to redefine: IT advances rapidly, and the BFSI sector needs to
        be ready to refine and rework its metrics as circumstances change, and
        to invest in new application delivery management tools and
        infrastructure.
      • Failure to understand the importance: Many organisations in the BFSI
        sector are aware of the need for SLAs but fail to understand the impact
        SLAs have on their business. If SLAs are adopted more stringently and
        monitored regularly, they can offset the vulnerability that comes with
        adopting new technology.
  11. What SLAs fail to measure: Their take
      “Business applications are our core responsibility, and having an
      in-house team to monitor application performance and delivery is crucial,
      but SLAs are neglected as the team is expected to meet the business
      requirements anyway.” - VP-IT, Insurance
      “Measuring the performance of vendors is important, but I have to admit
      that the SLAs are more on paper. We have to strike a balance between
      performance and flexibility in dealing with vendors.” - Head-IT, Health
      Insurance
      “With the in-house team, the service levels are a given. So, frankly, we
      haven’t felt the need for SLAs with our in-house team.” - Head-IT, Old
      Private Bank
      Anunta’s take: From a methodology standpoint, every SLA that an
      organisation enters into needs to be linked to the end-user experience.
      One must therefore focus on ensuring that every technical SLA is
      translated into an end-user SLA, that every end-user SLA is enforceable,
      and that the system is proactive, i.e. SLA defaults can be identified
      before they occur.
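The proactive stance described here, identifying an SLA default before it occurs, can be sketched as a simple budget-burn check: compare the downtime consumed so far against the share of the period's downtime budget that should have been consumed by now, and warn when the burn rate is too high. The function names, the 80% warning ratio and all figures below are hypothetical illustrations, not part of the study.

```python
# Hypothetical sketch of proactive SLA monitoring: rather than reporting a
# breach after the fact, warn when the period's downtime budget is being
# consumed faster than the SLA allows. Names and thresholds are illustrative.

def downtime_budget_minutes(sla_pct: float, period_minutes: float) -> float:
    """Total downtime the SLA permits over the measurement period."""
    return period_minutes * (1.0 - sla_pct / 100.0)

def sla_status(sla_pct: float, period_minutes: float,
               elapsed_minutes: float, downtime_so_far: float,
               warn_ratio: float = 0.8) -> str:
    """Classify SLA health partway through the measurement period."""
    budget = downtime_budget_minutes(sla_pct, period_minutes)
    if downtime_so_far >= budget:
        return "BREACHED"
    # Budget available pro rata for the elapsed fraction of the period.
    prorated = budget * (elapsed_minutes / period_minutes)
    if prorated > 0 and downtime_so_far >= warn_ratio * prorated:
        return "AT RISK"  # burning budget faster than the SLA allows
    return "OK"

month = 30 * 24 * 60
# 10 days into the month, 40 minutes of downtime against a 99.7% SLA:
print(sla_status(99.7, month, 10 * 24 * 60, 40))  # → AT RISK
```

The design choice is the one argued in the slide: the check fires while corrective action is still possible, instead of documenting a default after the period closes.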
  12. Measuring Business Impact: Challenges in measuring business impact
      Key findings:
      • Firms in the BFSI sector measure the business impact of application
        performance only periodically.
      • Quantifying the losses caused by poor application performance is a
        major challenge.
      • Loss in employee productivity was measured in terms of the number of
        volumes, people or hours lost due to incidents that cause a dip in
        application performance.
      • Overall employee productivity loss due to poor application performance
        is in the range of 10-20%.
      Critical inferences:
      • The link between business and IT performance is at best tenuous, and
        often non-existent.
      • Revenue loss due to application performance issues is almost completely
        ignored, with no correlation being cited between the two.
      • Additionally, while brand credibility does take a hit, it too is not
        being measured, whether through revenue loss or otherwise.
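The "people and hours lost" quantification the respondents describe can be made concrete with a short worked example. Everything below, the incidents, headcount and hours, is invented for illustration; only the method (people affected times hours lost, as a share of total available work hours) reflects what the survey reports.

```python
# Hypothetical worked example of the productivity-loss quantification the
# respondents describe: people affected x hours lost per incident, expressed
# as a share of total available work hours. All input numbers are invented.

incidents = [
    # (people affected, hours lost per person)
    (120, 1.5),   # core-banking slowdown at 120 branch desks
    (40, 3.0),    # application outage for a 40-person ops team
    (250, 0.5),   # brief login failures across 250 endpoints
]

headcount = 500          # endpoints in scope
work_hours_month = 160   # ~8 h/day x 20 working days

hours_lost = sum(people * hours for people, hours in incidents)
total_hours = headcount * work_hours_month
loss_pct = 100.0 * hours_lost / total_hours

print(hours_lost)          # 425.0 person-hours lost to incidents
print(round(loss_pct, 2))  # 0.53 (% of the month's work-hour capacity)
```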
  13. Measuring Business Impact: Their take
      “In such a situation, poor application delivery can lead to revenue
      losses of up to 5-10%.” - Head-IT, General Insurance
      “There is productivity loss if an issue is unresolved in 30 minutes or
      1 hour. When networks are not available for a day, the operational cost
      increases and there are productivity losses of up to 30%.” - Head-IT,
      Old Private Bank
      “There is one thing that you can’t measure: erosion in brand
      credibility.” - Head-IT, Public Sector Bank
      Anunta’s take: It is not surprising that user organisations are unable
      to quantify the revenue impact of poor application performance. The need
      of the hour is therefore twofold: first, the end-user SLAs discussed
      earlier; second, a fundamental shift in how one measures IT’s impact on
      the business. Here, one needs to move away from the TCO discussion and
      look to a more tangible and measurable metric, i.e. the cost of
      application delivery per user, per month. When this is seen in the
      context of revenues lost on account of application downtime, the
      cost-benefit analysis becomes far clearer and organisations are able to
      map IT to business goals much better.
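The per-user, per-month metric suggested here, set against revenue lost to downtime, amounts to a simple two-sided comparison. The sketch below only shows the shape of that comparison; none of the figures (spend, revenue, downtime fraction, currency) come from the study, and all are invented for illustration.

```python
# Hypothetical cost-benefit sketch of the per-user, per-month delivery-cost
# metric. None of these figures come from the study; they only illustrate
# putting delivery cost and downtime-linked revenue loss side by side.

users = 500
monthly_delivery_cost = 1_500_000.0  # total spend on application delivery (INR, invented)
cost_per_user_month = monthly_delivery_cost / users

monthly_revenue = 50_000_000.0       # revenue attributable to these applications (invented)
downtime_fraction = 0.005            # 0.5% of business time lost to downtime (invented)
revenue_at_risk = monthly_revenue * downtime_fraction

print(cost_per_user_month)  # cost of application delivery per user, per month
print(revenue_at_risk)      # revenue lost to downtime in the same month
```

Once both numbers are on the same monthly footing, the cost-benefit question the slide raises (does better delivery pay for itself?) becomes a direct comparison rather than a TCO debate.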
  14. Top 5 insights
      #1 Yes: Application uptime is critical to the BFSI sector.
         But: There are no set standards or benchmarks to measure performance.
      #2 Yes: They measure application performance from an end-user
         perspective.
         But: The metrics are neither well defined nor monitored proactively.
      #3 Yes: They have SLAs in place.
         But: These are at best at the device level, more often than not on
         paper only, with minimal enforcement. End-user SLAs are non-existent.
      #4 Yes: They periodically measure the business impact of application
         performance.
         But: Correct metrics and quantification are almost non-existent; loss
         of revenue due to employee productivity issues caused by application
         downtime is not measured.
      #5 Yes: The BFSI sector is an early adopter of technology and remains
         its largest buyer.
         But: The inability to identify and measure new-age performance
         indicators such as application delivery leaves them in an ambiguous
         grey area where technology and its efficacy are not necessarily seen
         together.
  15. Corporate Headquarters: Mumbai
      4th Floor, Paradigm, B Wing, Mindspace, New Link Road, Near Toyota
      Showroom, Malad (West), Mumbai 400 064.
      Application Performance Guaranteed