Proactive End-User Experience Monitoring of Enterprise IT Services
Jun 17, 2010
Introductory level presentation on monitoring and analysis of IT services end-user experience. © Hypersoft Information Systems, 2006
Company and presenter information. © Hypersoft Information Systems, 2006
Presentation agenda – user experience: definition, metering opportunities and the differences between them, statistical quality, business considerations. How do we view user experience? Is it just about how quickly a customer gets what he/she needs? What are the everyday needs of service users? Can they be grouped into separate units with a set of common properties and quality demands (corporate, social networking, archiving and data backup, etc.)? A service may be functioning normally based on system uptime and operation while real users are still disappointed – what is the cause? Different service judgment priorities for system operators, service managers and customers. What types of service quality and user experience metering are predominant in the current global context? What trends appear to be emerging regarding user experience measurements? What are the most important experience metrics for companies, clients and other concerned stakeholders? What service levels should be considered? We have the data – but is it trustworthy? How do we switch from a firefighting mode of business operation to a strategic workflow mode? © Hypersoft Information Systems, 2006
Definition: ISO 9241-210 and others. User perception of the interface: pleasing design, coloring, navigation; usage simplicity, intuitiveness, rollback opportunities and the ability to save the state of the service usage process. User-specific factors: fashion and trends (new communications media, social networks and technologies); way of life (corporate users, travelers, students, housewives, grouped business entities, etc.); environmental awareness; mobility and the devices being used. Perception of service execution: are all transactions successful (i.e. is service delivery stable)? Do all transactions fit within a comfortable time range without deviating much? Monitoring service availability only partially answers questions about user experience; what needs to be done is to monitor the actual quality of transactional performance. © Hypersoft Information Systems, 2006
User vs. platform perception of service quality. User experience monitoring: aggregating, transforming and hiding underlying technical processes simplifies daily analysis and the understanding of how users actually perceive a service. Easy to align to daily user needs and to service contract terms, conditions and obligations. Understood by all concerned stakeholders: service managers, clients, corporate officers, administrators, regulators and auditors. Any business transaction related to user experience essentially has two main factors – success or failure, and levels of comfort and speed. Platform monitoring: KPIs are too numerous and vague, and hard to project onto user perception of service quality. Misleading statistics – the system can seem to function properly while users are still willing to switch service provider without explaining the reasons. Virtually impossible to align to daily and long-term strategic business goals. © Hypersoft Information Systems, 2006
Low-level KPI monitoring – should it be ignored or mixed with other service quality assessment mechanisms? The multitude of KPIs to choose from introduces a degree of selection complexity. Who can effectively use this type of metric? Measuring actual service usage quality – monitoring actual service usage by internal and external clients. All transactions are monitored, i.e. all levels of end-user experience are tracked on a historical and topological basis. Synthetic quality assessment – mimicking real user actions with robotic or other simulations. The need to identify, analyse, select and implement modelling techniques for user transactions, and to assess which ones are business-relevant and which can be paid less attention. © Hypersoft Information Systems, 2006
Using platform monitoring mechanisms to try to establish levels of end-user experience: no direct linkage to the business transaction is provided. A transaction may utilize different protocols, transport and storage mechanisms, system events and other low-level technical statistics. Due to the complexity of infrastructure ecosystems and the underlying processes, this is hard to understand for ordinary non-technical people wishing to assess how users perceive a service. Even technical staff can spend a lot of time analysing and decoding process messages. Lots of KPIs to determine – which ones should be focused on, and are they relevant at all? Focuses on infrastructure uptime: the servicing system may be up and running, but does that tell us anything specific regarding user comfort? © Hypersoft Information Systems, 2006
© Hypersoft Information Systems, 2006 Deployed agents automatically gather data on service experience and usage and process it further for analysis purposes – no manual labor needed. Advantages: direct verification of the quality of service and the real-world user experience of actual IT service users provides direct linkage to service contract, billing and bonus/penalty levels. Understood by the service provider and by the service receiver. Can also be independently audited and verified, which helps to resolve any conflicts that arise. Incidents can be analysed and proactively channeled into improving the quality of service and the actual user experience. Disadvantages: incidents can still take place in parts of the service that were not monitored/analysed before – hence an increased risk of customer alienation. Needs careful preparation. If monitoring agents are implemented within the servicing landscape, there is a possibility of an increased processing, traffic and storage burden. Since most services are used in an inconsistent manner, ongoing analysis is complicated by gaps in service performance data. Trending and forecasting can also be less accurate due to this fact.
Modeling real users using a service from different global locations. Benefits: consistent metrics simplify the analysis process – agents can be deployed from the outside, performing common transactions at fixed time intervals. Professionals are notified of a possible user experience (service quality) downfall before actual users experience it – this can be channeled into the proactive resolution of a likely incident. Business alignment is still maintained – since monitoring agents mimic actual users, there is no need to slip into the pond of numerous technical KPIs. Can be used to improve the standardization process – selected transactions reflect the most common/significant user actions. No need for internal deployment; metering can be organized from the outside (e.g. web service usage through a browser interface from different locations). Pitfalls: have to select which types of transactions to monitor, therefore a careful initial analysis should be carried out to rank transactions, their average occurrence during normal business hours, and which of them are critical for business survival and expansion. A chance of the metrics misleading analysts – it is not real user experience, but a modeled one. Can introduce more burden on the ecosystem even if monitoring agents are placed on the outside – is it justified? (Compare with actual user experience measurements.) Some transactions can only be measured partially, or systems have to be configured to ignore them (an online purchase, for example). © Hypersoft Information Systems, 2006
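The external-agent approach described above can be sketched in a few lines: a scheduled probe runs a modeled transaction repeatedly, timing it and recording success. This is a minimal illustration, not a real monitoring product; all names (`probe`, `sample_signup`) are hypothetical, and the sample transaction merely simulates latency with a sleep.

```python
import time
from statistics import mean

def run_transaction(transaction):
    """Execute one synthetic transaction; return (success, duration in seconds)."""
    start = time.perf_counter()
    ok = transaction()                      # the modeled user action
    return ok, time.perf_counter() - start

def probe(transaction, runs=5):
    """Run a modeled transaction several times, as an external agent would on a
    fixed schedule, and summarize success rate and average timing."""
    results = [run_transaction(transaction) for _ in range(runs)]
    successes = [t for ok, t in results if ok]
    return {
        "success_rate": len(successes) / runs,
        "avg_seconds": mean(t for _, t in results),
    }

# Stand-in for a real modeled action (e.g. loading a sign-up page).
def sample_signup():
    time.sleep(0.01)    # simulate service latency
    return True

summary = probe(sample_signup, runs=3)
```

In practice the probe would drive a browser or API client against the real service, and the summary would be pushed into a trending store rather than returned inline.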
Direct service availability measured with conventional probing (for example pings, TCP/IP connections, HTTP) does not fully reflect the status of user experience with the service. Correctly identifying which types of transactions contribute to business survival in the long run presents an opportunity for proactive management of the provided service. In the example above, daily conventional probing indicates normal service operation, while the common types of modeled user-action transactions needed for sustained operation, which reflect the loyalty of existing users, show worrying transaction execution performance. Business-critical transactions linked to corporate growth potential also display figures different from the web-service availability, hinting at lowered levels of end-user experience during the user registration process. Businesses and service providers should carefully identify and prioritize the transaction types to monitor or simulate that are directly linked to corporate stability. Selecting and monitoring non-business-critical transactions is likely to introduce confusion during the analysis of service performance with respect to end-user experience, leaving managers with a decision-making process that is more reactive and focused in the wrong strategic direction. © Hypersoft Information Systems, 2006
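The gap between availability and experience can be made concrete with a toy check (all names and thresholds here are illustrative assumptions, not part of the original material): a conventional probe answering quickly reports the service as "up", while the business transaction still violates the user's comfort range.

```python
def availability_ok(probe_ms, timeout_ms=1000):
    """Conventional probe: service counts as 'up' if the probe answers in time."""
    return probe_ms is not None and probe_ms < timeout_ms

def experience_ok(transaction_ms, comfort_ms=3000):
    """User experience: the business transaction must finish within a comfort range."""
    return transaction_ms is not None and transaction_ms <= comfort_ms

# The service answers pings quickly, so availability monitoring reports green...
ping_ms = 31
# ...yet the registration transaction takes far longer than users tolerate.
registration_ms = 9800

available = availability_ok(ping_ms)
comfortable = experience_ok(registration_ms)
```

Here `available` is true while `comfortable` is false, which is exactly the disagreement between probing and end-user experience that the paragraph above describes.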
Real-world and modeled user experience can vary greatly during different service hours and in different locations. Metering statistics should be adequately dealt with through filtering and visualization capabilities to maximize alignment with contractual terms, get the most out of IT monitoring tools, and simplify service quality analysis. While some data can show either inadequate or excellent performance over a fixed time range, IT professionals are likely to get stuck with more manual labour trying to separate business-relevant statistics from all the rest. Topological monitoring introduces another important element into the assessment of user experience. While it may be hard to deploy and manage metering agents on every client machine, one saving option for accurate end-user experience analysis is monitoring service delivery from the sites and locations that possess the biggest client base. © Hypersoft Information Systems, 2006
Real-world transactions give us an invaluable opportunity to adapt and correct service quality alongside user experience. Any business process is exactly reconstructed. Transaction events pinpoint where in the servicing landscape end-user experience dropped, hinting at where to investigate further. Once the analysis is done, professionals can proactively channel service delivery to eliminate the chances of lowered user experience through infrastructure restructuring, optimization and other necessary means. Due to their transactional nature, the metrics retain complete accuracy, which means quality – no other metering means (estimation, for example) offers this sort of knowledge. This is useful for aggregation and high-level monitoring (the metrics are understood by top decision makers, clients and others), as well as for auditing purposes that help to effectively resolve disputes. © Hypersoft Information Systems, 2006
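One way to picture the "pinpointing" idea above is as a reconstructed trace of per-step transaction events; scanning it for the dominant step points analysts at the part of the landscape where experience dropped. This is a sketch under assumed names (`TransactionEvent`, `slowest_step`), not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass
class TransactionEvent:
    """One step of a reconstructed business transaction."""
    step: str
    component: str       # where in the servicing landscape it ran
    duration_ms: int

def slowest_step(events):
    """Point analysts at the step where end-user experience degraded most."""
    return max(events, key=lambda e: e.duration_ms)

# A reconstructed sign-up transaction: the database step dominates.
trace = [
    TransactionEvent("load page", "web frontend", 120),
    TransactionEvent("validate form", "app server", 80),
    TransactionEvent("create account", "database", 4200),
]
culprit = slowest_step(trace)
```

Because each event carries both a timing and a location, the same trace supports auditing (the full history is preserved) and high-level aggregation (sum the durations for the user-visible total).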
Modeled user experience monitoring presents a wealth of options for controlled service delivery. Virtually any type of business transaction can be created, implemented and run on an automatic, scheduled basis. Like actual user experience statistics, it can be aggregated into different time scales (daily, weekly, monthly, annual) to help with trending and forecasting activities. Metrics consistency means the standard distribution of user experience (minimum, maximum and average timings to finish a transaction) can always be checked and assessed. Applicable to, and implementable in, many standard and custom services – web, hosting, messaging, remote back-up, instant messaging, collaboration. Additionally, it is possible to monitor remotely, within the organization and across cloud landscapes. © Hypersoft Information Systems, 2006
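The aggregation step mentioned above (rolling per-transaction timings up to daily minimum/maximum/average figures) can be sketched as follows; the function name and sample data are assumptions for illustration only.

```python
from collections import defaultdict

def aggregate_daily(samples):
    """Roll per-transaction timings, given as (date, seconds) pairs, up to
    daily min/max/average figures for trending and forecasting."""
    by_day = defaultdict(list)
    for day, seconds in samples:
        by_day[day].append(seconds)
    return {
        day: {"min": min(v), "max": max(v), "avg": sum(v) / len(v)}
        for day, v in by_day.items()
    }

samples = [
    ("2010-06-14", 2.0), ("2010-06-14", 4.0),
    ("2010-06-15", 3.0),
]
daily = aggregate_daily(samples)
```

The same grouping key works for weekly, monthly or annual roll-ups, which is what makes consistent synthetic metrics convenient for trend lines.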
A viable option for the tracking and analysis of user experience is combined monitoring of service responsiveness with synthetic and actual transactions. Service portfolios are better managed with direct knowledge of system status and its interaction with actual users, while still having enough information for making accurate forecasts about the dynamics of common transaction execution. This way the weak spots of the two metering techniques are eliminated, providing a solid foundation for a proactive decision-making process. Furthermore, while actual user experience statistics are used for settling disputes and for proactive analysis of service delivery and support activities, “robotic” user experience can shed light on service quality from parts of global markets that are not serviced yet, allowing service expansion activities to be carried out within a more controlled business environment as time passes. © Hypersoft Information Systems, 2006
What should we consider right from the start? Incorrect initial definitions of key user experience KPIs will lead to wrong strategic moves at the operational level once the service is launched. Define, identify, prioritize and monitor only those transactions that constitute business value. Getting stuck on platform and application monitoring hides the big picture. It's not about application performance – it's service performance that users care about most. Complex global environments and clouds connect a variety of different systems – evaluating them based on physical locations breaks the end-to-end principle of service delivery; only a logical breakdown of delivered-service statistics provides an unbiased view of end-user experience. © Hypersoft Information Systems, 2006
Monitoring the right metrics means managers are certain about their future business goals and do not have to resort to a tactical operating mode. Having KPIs defined in a technology-agnostic manner saves service professionals the time spent translating user experience data collected through a variety of technical performance indicators into business-relevant information. Metrics that are understood by both service providers and service receivers ensure the service delivery process is robust and secure. Since user experience performance metering is defined and accepted by both parties, the chances of any misunderstanding during service execution are minimized, saving both sides personnel spending and possible penalties. Remember that proactive service demands careful analysis in order to find the root cause of a service breakdown and perform the activities necessary to overcome it in the future. This implies having one important element – reliable historical data on business processes. Any metric that has drifted away from the actual user experience introduces the wrong values and variables into the equation, leading to wrong strategic moves. On the other hand, reactive service is still reactive even if managed in an extremely responsive manner. An e-mail alert on a service breakdown during the night can prompt a company to quickly take the necessary steps to correct user experience levels, but if there are clients who use the service at night or in a different time zone, their levels of trust and loyalty are likely to deteriorate. Many believe that cloud and infrastructure complexity, combined with the end-to-end principle, lowers user experience metering accuracy due to the great number of dependencies, correlations and events. Yet it is still possible to acquire precise data on service performance through transactional monitoring.
Tools that concentrate on business-relevant KPIs and collect the full history of business processes provide an accurate analysis framework for long-term business activities. © Hypersoft Information Systems, 2006
Monitoring user experience across all systems and platforms. Once a transaction is defined and monitored, it can be processed to address a great variety of user experience-related problems. It can be: applied to service levels and the service portfolio catalogue; tracked by the event types that introduced significant changes to user perception of the service; aggregated for the highest-level dashboard view of the general “service health” figure from the perspective of service users; used for trending and forecasting activities in the mid and long run; analysed from prudent (what are the worst-performing parts of services, and how do we deal with them?) and opportunistic (which ones are the best performers, and can we migrate their business processes to other parts of the service delivery infrastructure?) points of view through the use of different filtering business rules; aggregated to logically unite different business units (not only users, but offices, countries and other social communities) to analyse the overall perception of quality of service; etc. © Hypersoft Information Systems, 2006
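The prudent and opportunistic views mentioned above amount to ranking monitored transactions by a quality figure and looking at both ends of the list. A minimal sketch, with hypothetical names and made-up average timings:

```python
def rank_services(avg_timings, worst_n=2, best_n=2):
    """Split monitored transactions into an opportunistic view (best performers
    to learn from) and a prudent view (worst performers to fix)."""
    ordered = sorted(avg_timings.items(), key=lambda kv: kv[1])  # fastest first
    return {
        "best": ordered[:best_n],      # opportunistic: what works well
        "worst": ordered[-worst_n:],   # prudent: what needs attention
    }

# Average transaction timings in seconds (illustrative values).
timings = {"login": 1.2, "search": 0.8, "checkout": 6.5, "upload": 3.1}
view = rank_services(timings)
```

The same sort key could be a failure rate or a deviation from a contractual target instead of a raw timing; the filtering business rule is just the key function.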
Conclusion. The wealth of IT services introduces different definitions of what the actual user experience is. While some services require 24/7 access to the service interface with no strict demands on service execution time (data backup during the night?), others are highly dependent on the speed of service performance, which is directly linked to customer loyalty. IT professionals should consider user needs prior to service delivery, and make those needs quantifiable and verifiable during service execution. In addition to reasonable service contract definitions, surveys and daily communication with clients help to uncover what needs to be monitored and on what basis, as well as whether there has been a change in how service quality is viewed as a whole. Make use of both types of metering techniques – this leverages the benefits provided by each, effectively making the decision-making process less error-prone and more strategy-oriented. Consolidating synthetic measurements with real-world user experience statistics helps to view the servicing landscape as a whole – a consistent performance view with robotic measurements simplifies trending and proactive management, while real-world data offers thorough grounds for analysis as well as for dealing with client disputes and compliance issues. © Hypersoft Information Systems, 2006
Contact information: Website: hypersoft.com E-mail: [email_address] © Hypersoft Information Systems, 2006
1. © Hypersoft Informationssysteme GmbH, 2010 Proactive End-User Experience Monitoring of Enterprise IT Services Dr. Serguei Dobrinevski [email_address]
© Hypersoft Informationssysteme GmbH, 2010 Introduction Hypersoft Information Systems <ul><li>More than 200 enterprise customers for business service metrics</li><li>More than 2 million users in organisations measured by our software</li><li>Offices in Germany, France, USA and Belarus</li><li>Specializes in data collection and analysis of major IT services</li><li>Cooperation with final customers and service providers</li></ul> Dr. Serguei Dobrinevski <ul><li>Degree in physics</li><li>Founded Hypersoft in 1993</li><li>Developed and formalised strategic concepts for Hypersoft products</li></ul>
© Hypersoft Informationssysteme GmbH, 2010 Agenda End-User Experience: Common Monitoring Issues <ul><li>Definition </li></ul><ul><li>User vs. System measurement of service quality </li></ul><ul><li>Common metering techniques </li></ul><ul><li>Metering scenarios and metrics selection </li></ul><ul><li>How to tell the difference between high and low quality metrics </li></ul><ul><li>Proactive analysis </li></ul>
<ul><li>User perception of interface </li></ul><ul><li>Topology-specific factors </li></ul><ul><li>Action sets corresponding to typical activities </li></ul>© Hypersoft Informationssysteme GmbH, 2010 End-User Experience: How Should We View It Transactional monitoring of business processes Service Availability Service Quality %
© Hypersoft Informationssysteme GmbH, 2010 Measuring User Experience <ul><li>End-to-end orientation is agnostic of underlying technical processes </li></ul><ul><li>Goes in line with daily needs </li></ul><ul><li>Aligned to contractual obligations </li></ul><ul><li>High degree of comprehension by all stakeholders </li></ul><ul><li>Relevant transaction definition </li></ul><ul><li>Multitude of KPIs – which ones should we consider? </li></ul><ul><li>Most KPIs are vague to ordinary users </li></ul><ul><li>Problems with penalties – metrics do not match perceived outages </li></ul><ul><li>Hard to align to business strategy </li></ul>User vs. Platform
Metering Techniques <ul><li>Low-level platform monitoring? </li></ul><ul><li>Measuring actual user experience </li></ul><ul><li>Performing test transactions </li></ul>© Hypersoft Informationssysteme GmbH, 2010
Platform Monitoring © Hypersoft Informationssysteme GmbH, 2010 <ul><li>No link to business transaction </li></ul><ul><li>Hard to comprehend </li></ul><ul><li>Complex sets of KPIs </li></ul><ul><li>More emphasis on service availability, not service quality </li></ul>TCP/IP connect at port 8080 = 206ms; ICMP ping = 31ms; Security event Error code = 2572 Upload documents for processing 98 sec. Average book order 26 sec. Request support 4 sec. ???
© Hypersoft Informationssysteme GmbH, 2010 Actual Service Quality Measurements Deploy monitoring agents on client machines to gather service usage data <ul><li>Actual experience measured </li></ul><ul><li>Direct linkage to service contract </li></ul><ul><li>Comprehensible </li></ul><ul><li>Conflicts and incident analysis </li></ul>Non-standard desktop builds Deployment challenges Non-standard user behavior
Synthetic Transactional Probing <ul><li>Consistent </li></ul><ul><li>Proactive checks </li></ul><ul><li>Still focuses on business transaction </li></ul><ul><li>Standard transactions </li></ul><ul><li>Can be non-intrusive </li></ul>© Hypersoft Informationssysteme GmbH, 2010 External deployment of “user” agents Which transactions are business relevant? Synthetic is not actual users Greater load on service ecosystem – is it justifiable? Could trigger unwanted activity
What User Experience to Monitor? Define Business Transactions: Cloud Service Provider Example © Hypersoft Informationssysteme GmbH, 2010 Common Critical Direct
User Experience Metering Considerations © Hypersoft Informationssysteme GmbH, 2010 Measure timing of complex and elementary transactions Different user experience from different global locations
Actual User Experience – Analysis Base. © Hypersoft Informationssysteme GmbH, 2010 <ul><li>Real users monitoring – transaction reconstruction </li></ul><ul><li>Proactive service delivery measurement </li></ul><ul><li>Complete accuracy and trustworthiness </li></ul><ul><li>Auditing and verification opportunities </li></ul>Events to consider
Mirroring Experience – Business Transactions © Hypersoft Informationssysteme GmbH, 2010 Sample Web Availability Transaction – Service Registration Users sign-up Registration Support request Content search … Enter name, e-mail, password, confirm password Load website Press sign up button Etc.
Combined Metering Opportunities <ul><li>Mix actual and modeled user experience assessments </li></ul><ul><li>Actual real user data </li></ul><ul><li>Consistent “robotic” user experience </li></ul>© Hypersoft Informationssysteme GmbH, 2010
Monitoring Pitfalls Wrong transactions to monitor – do the homework on service and metrics definitions. Platform and application dependency – aim for platform-agnostic reporting. Systems complexity – structure logically and operationally rather than physically. © Hypersoft Informationssysteme GmbH, 2010 Assess Prepare Launch
Deciding on KPIs for Optimal Service Delivery <ul><li>Start with user experience service definitions and keep them technology agnostic</li><li>Develop metrics that can be explained to business users, and do that prior to looking at the delivering applications</li><li>Proactive service means analysing the reasons for the failures. Reactive service remains reactive even if the reaction is quick.</li><li>High levels of accuracy are actually possible, contrary to widespread disbelief.</li></ul>© Hypersoft Informationssysteme GmbH, 2010
Transaction Integration as a Key © Hypersoft Informationssysteme GmbH, 2010 1. Transaction definition at the business level. 2. Monitoring of the transaction and its elements on multiple systems and platforms.
Conclusion <ul><li>User experience differs between user types and services </li></ul><ul><li>Find out what users actually do and construct metering that reflects their daily needs </li></ul><ul><li>Stay focused on the long-term goals – quality user experience data would mean correct strategic choices are being made </li></ul><ul><li>Predictable means proactive </li></ul>© Hypersoft Informationssysteme GmbH, 2010
Contact Information <ul><li>www.hypersoft.com </li></ul><ul><li>[email_address] </li></ul>© Hypersoft Informationssysteme GmbH, 2010