SiteOps provides infrastructure solutions and services including architecture, evaluation, implementation, testing and assurance, and operations and maintenance. It can deliver end-to-end solutions for hardware and software infrastructure as well as networking and applications. SiteOps takes a layered approach to infrastructure development, ensuring reliability and scalability.
Case study of a major Mercedes-Benz dealer: a web project with features typical of the automotive sector, spanning the new-car brand site through to the multi-brand used-car site. SEO strategies and solutions (analysis, strategy, CMS implementation), conversion elements, and monitoring techniques using various tools.
The document analyzes the performance of major and minor ports in India. It finds that while average turnaround time and output per ship have improved, efficiency is impacted by outdated infrastructure, overstaffing, and bureaucratic red tape. The document recommends increasing private sector participation, boosting capacity, strengthening supply chain connectivity, and providing ports more autonomy to improve competitiveness.
The complete list of SEO tools: all the SEO tools useful for analysis and deeper investigation to help improve search engine rankings.
The document discusses measuring and evaluating the performance and productivity of ports. It examines various factors that make analyzing port performance challenging, such as the large number of parameters involved, lack of reliable data, and local factors influencing results. The document focuses on defining common methodologies for measuring performance, specifically analyzing the duration of ships' stays in ports and the quality of cargo handling. It explores various key performance indicators used to evaluate efficiency related to issues like quay productivity, crane utilization, and ship turnaround times. The conclusion emphasizes the importance of developing a culture of performance measurement in ports using agreed-upon indicators to understand system performance and support decision-making.
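Indicators like ship turnaround time, berth output per ship, and crane utilization reduce to simple ratios over port call records. A minimal sketch (field names and figures are illustrative, not taken from the document):

```python
from datetime import datetime

def turnaround_hours(arrival: str, departure: str) -> float:
    """Ship turnaround time: total hours between arrival and departure."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(departure, fmt) - datetime.strptime(arrival, fmt)
    return delta.total_seconds() / 3600

def berth_output_per_ship(tonnes_handled: float, ships_served: int) -> float:
    """Average cargo output per ship served at a berth."""
    return tonnes_handled / ships_served

def crane_utilisation(working_hours: float, available_hours: float) -> float:
    """Fraction of available crane time actually spent working cargo."""
    return working_hours / available_hours

# One hypothetical port call
print(turnaround_hours("2023-05-01 06:00", "2023-05-03 18:00"))  # 60.0
print(berth_output_per_ship(120000, 8))                          # 15000.0
print(round(crane_utilisation(18, 24), 2))                       # 0.75
```

Agreeing on definitions like these (what counts as "arrival", which hours are "available") is exactly the culture of measurement the document calls for.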
How to identify the right KPIs to monitor the progress of your web project: the processes, tools, and best practices for a web analytics project.
Measuring Web Performance - HighEdWeb Edition (Dave Olsen)
Today, a Web page can be delivered to desktop computers, televisions, or handheld devices like tablets or phones. While a technique like responsive design helps ensure that our websites look good across that spectrum of devices, we may forget that we also need to make sure our websites perform well across that same spectrum. More and more of our users are shifting their Internet usage to these more varied platforms and connection speeds, with some moving entirely to mobile Internet. In this session, we’ll look at the tools that can help you understand, measure, and improve the performance of your websites and applications. The talk will also discuss how new server-side techniques might help us optimize our front-end performance. Finally, since the best way to test is to have devices in your hand, we’ll discuss some tips for getting your hands on them cheaply. This presentation builds upon Dave Olsen’s “Optimization for Mobile” chapter in Smashing Magazine’s “The Mobile Book.”
Web Analytics - WHR 2012 - A Practical Guide to Google Analytics (Enrico Ferretti)
Web Analytics - WHR 2012: practical information, advice, case histories, and a complete guide to using Google Analytics. Speaker: Enrico Ferretti.
What Web Performance Is and Why You Should Care (Olegs Belousovs)
A cultural, exploratory talk on web performance, given at the WordPress Meetup in Turin on 12 October 2016.
The web is made by all of us, for other people like us. Even though this topic, together with accessibility and security, may seem the least "sexy", you should care about it no less than about which theme and plugins you use on your site: out of respect for the people who visit the sites you build, and to make the web a better place for everyone.
Video on YouTube: https://youtu.be/2nM6Mc13Gto
Some suggestions for choosing the right KPIs based on the business model and business function, and how to present the results with reports and dashboards.
Evaluation and performance measurement serve several key purposes:
1) They help ensure accountability, focus efforts on valuable results, and increase investor commitment.
2) They provide useful feedback to stakeholders to help them make wise decisions about resources.
3) They address quality improvement through systematic reflection on plans and progress.
Evaluation focuses on interventions, while performance measurement focuses on results over time. Evaluation looks for qualitative stories; measurement looks for quantitative signals. The goal of evaluation is to provide useful feedback to influence decisions. There are various evaluation strategies and methods that can be used formatively, to improve programs, or summatively, to examine outcomes and impacts. Performance measurement establishes metrics in key areas like effectiveness, efficiency, quality, and time.
Today, a web page can be delivered to desktop computers, televisions, or handheld devices like tablets or phones. While a technique like responsive design helps ensure that our web sites look good across that spectrum of devices, we may forget that we also need to make sure our web sites perform well across that same spectrum. More and more of our users are shifting their Internet usage to these more varied platforms and connection speeds, with some moving entirely to mobile Internet.
In this session we’ll look at the tools that can help you understand, measure and improve the web performance of your web sites and applications. The talk will also discuss how new server-side techniques might help us optimize our front-end performance. Finally, since the best way to test is to have devices in your hand, we’ll discuss some tips for getting your hands on them cheaply.
This presentation builds upon Dave’s “Optimization for Mobile” chapter in Smashing Magazine’s “The Mobile Book.”
This talk was given at the Responsive Web Design Summit hosted by Environments for Humans.
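The kind of measurement these talks describe typically starts from page-load timestamps such as those exposed by the W3C Navigation Timing API. A minimal sketch of deriving common front-end metrics from such timestamps (the field names mirror that API; the sample values are invented):

```python
def timing_metrics(t: dict) -> dict:
    """Derive common front-end performance metrics from
    navigation-timing-style millisecond timestamps, all measured
    relative to navigationStart."""
    return {
        "ttfb_ms": t["responseStart"] - t["requestStart"],
        "dom_ready_ms": t["domContentLoadedEventEnd"] - t["navigationStart"],
        "page_load_ms": t["loadEventEnd"] - t["navigationStart"],
    }

# Hypothetical capture from a single page view
sample = {
    "navigationStart": 0,
    "requestStart": 120,
    "responseStart": 340,
    "domContentLoadedEventEnd": 1100,
    "loadEventEnd": 2450,
}
print(timing_metrics(sample))
# {'ttfb_ms': 220, 'dom_ready_ms': 1100, 'page_load_ms': 2450}
```

In a browser the same raw numbers come from `performance.timing`; aggregating them across real users is the basis of the monitoring tools the sessions cover.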
This document discusses search analytics and Sematext's search analytics product. It summarizes Sematext's search analytics software, which collects search data using Flume and stores it in HBase, then generates reports to help optimize search experiences. The software provides insights that help search providers and meet the needs of search users.
The Three Stages of Cloud Adoption - RightScale Compute 2013 (RightScale)
Speaker: James Staten - VP and Principal Analyst, Forrester Research
As a RightScale user you are clearly a leading adopter of cloud computing, but have you matured your use of the cloud to the point that you are fully exploiting the advantages it provides? Most cloud users aren’t. In this session, Forrester Research VP and Principal Analyst James Staten will help you understand how to move from a cloud user to an optimizer to a profit maker as you progress your understanding of cloud economics and evolve your application design and deployment practices.
The document discusses establishing proper governance for portal management. It outlines setting the stage for portal governance by defining why it is needed, what aspects can be governed, and how to develop a governance framework. The framework establishes roles, responsibilities, and policies around portal management. It also identifies 14 tactical areas that can be governed, such as user roles, content publishing, and search. Governance ensures consistent behaviors across the portal by defining who is responsible for what aspects and the decision-making processes.
A Digital Asset Management (DAM) solution and strategy can be key enablers for your enterprise to produce and deliver content in today's multi-channel world. As an open platform for content management, Alfresco can be used to build your DAM infrastructure: from search, preview, and assembly to digital rights management, renditioning, and publishing.
Web Performance 101 presentation from Feb 2011 meetup, presented by Steve Thair from Seriti Consulting.
Covers the basics of why web performance is important for your business, the key "rules" and the tools that are available in the market today.
Enabling the Real-Time Analytical Enterprise (Hortonworks)
This document discusses enabling real-time analytics in the enterprise. It begins with an overview of the challenges of real-time analytics due to non-integrated systems, varied data types and volumes, and data management complexity. A case study on real-time quality analytics in automotive is presented, highlighting the need to analyze varied data sources quickly to address issues. The Hortonworks/Attunity solution is then introduced using Attunity Replicate to integrate data from various sources in real-time into Hortonworks Data Platform for analysis. A brief demonstration of data streaming from a database into Kafka and then Hortonworks Data Platform is shown.
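The pipeline described above streams database changes into Kafka before landing them in the data platform. A toy sketch of the shape of that hand-off — turning a change-data-capture row into a Kafka-style (key, value) message (this is an illustration of the pattern, not Attunity Replicate's actual format; all field names are invented):

```python
import json

def cdc_to_message(change: dict) -> tuple:
    """Turn a change-data-capture row into a (key, value) pair as it
    might be published to a Kafka topic. Keying by table and primary
    key keeps all changes to one row on the same partition."""
    key = f'{change["table"]}:{change["pk"]}'
    value = json.dumps({
        "op": change["op"],           # 'insert' | 'update' | 'delete'
        "data": change.get("after"),  # row state after the change
        "ts": change["ts"],           # source commit timestamp
    }, sort_keys=True)
    return key, value

# Hypothetical quality event from an automotive source system
msg = cdc_to_message({
    "table": "quality_events", "pk": 42, "op": "insert",
    "after": {"defect": "paint", "line": 3}, "ts": 1700000000,
})
print(msg[0])  # quality_events:42
```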
The Cloud Foundry Bootcamp document provides an overview of a Cloud Foundry bootcamp presented in Portland in 2012. It was written by Chris Richardson and presented by Monica Wilkinson and Josh Long. The agenda covers why Platform as a Service (PaaS) matters to developers, an overview of Cloud Foundry, getting started with Cloud Foundry, the Cloud Foundry architecture, using Micro Cloud Foundry, and consuming Cloud Foundry services.
How to Consolidate Citrix Monitoring in a Single Pane of Glass (eG Innovations)
A recent survey by eG Innovations and xenappblog found that 68% of organizations are using 2-5 different tools for monitoring and managing their Citrix infrastructure. Multiple monitoring tools make it expensive to operate and troubleshoot IT infrastructure issues. Furthermore, a lot of manual effort is required to diagnose and fix performance issues.
Join Richard Faulkner, Enterprise Solutions Architect and CTP from Conversant Group, and John Worthington, Director of Customer Success at eG Innovations, and learn how you can get a single-pane-of-glass view of your Citrix infrastructure – from the client end to the virtual desktops/apps and even the backend applications.
See how you can:
--Monitor and get proactive alerts on the experience seen by Citrix users
--Track the performance of every layer and every tier of your Citrix infrastructure: NetScalers, StoreFronts, Virtual apps and desktops, WEM, PVS, License servers, etc.
--Troubleshoot in a single click and identify where the root cause of a problem lies: the network, storage, virtualization, or the Citrix stack
--Get insights to right-size and optimize your Citrix deployment
OSCON 2012: Design and Debug HTML5 Apps for Devices with RIB and Web Simulator (Gail Frederick)
The document discusses two open-source projects from Intel called Rapid Interface Builder (RIB) and Web Simulator that can be used to develop and debug HTML5 apps. RIB allows quick prototyping of web app UX through a drag-and-drop interface. Web Simulator allows debugging mobile web apps in Chromium by simulating device events and APIs. The document also discusses sample HTML5 apps created by Intel to demonstrate new web technologies and Intel's involvement in web standards.
Finding the Right Portal for E-Government Services (QuestexConf)
The document discusses selecting the right portal for e-government services. It describes different types of portals including link portals, thematic portals, agency portals, and data portals. It outlines factors to consider such as functionality, extensibility, cost, and complexity. The selection process involves identifying stakeholders, gathering requirements, issuing a request for proposal, evaluating responses, and conducting proof of concept testing before a final decision. Significant enterprise portal vendors and products are also listed.
Developing Modular, Polyglot Applications with Spring - SpringOne India 2012 (Chris Richardson)
This document discusses developing modular, polyglot applications using Spring. It describes how to refactor a monolithic application into modular microservices along functional boundaries (Y-axis scaling). This improves scalability, enables independent development and deployment of each service, and allows adopting different technologies for each service. Spring is well-suited for building these types of applications since it supports a variety of languages and frameworks and its programming model aligns well with developing microservices.
SharePoint - Right Intro To Development (Mark Rackley)
This document provides an overview of SharePoint development for developers. It discusses the stages of learning SharePoint, what SharePoint is as a platform, the different tools available for development including jQuery, SharePoint Designer, and Visual Studio. It also emphasizes the importance of using solution packages for deployment and engaging with the SharePoint community.
This document provides biographies for Dr. "Alex" Gouaillard and Dr. Ludovic Roux, who are experts in WebRTC testing. It discusses their backgrounds, careers, awards, and involvement in WebRTC standardization. It also outlines their company CoSMo Software's vision of contributing to open source to help grow the WebRTC community and ecosystem.
The document summarizes the Semantic Evaluation at Large Scale (SEALS) project. SEALS conducted large-scale evaluations of semantic technologies to help technology adopters and providers. It evaluated over 29 ontology engineering tools from 8 countries in its first campaign. SEALS developed services, methodologies, and infrastructure to support open, reproducible evaluations. This helped advance semantic technologies and their use.
This document outlines an agenda for a webinar on advanced strategies for testing responsive web applications. The webinar will cover key recommendations for testing responsive web designs at scale using automation and visual testing techniques. It will also discuss opportunities for improving performance and optimization of responsive web sites. The webinar will include demonstrations of automating tests across desktop and mobile browsers in parallel using cloud infrastructure as well as visual testing techniques using AI.
The document provides an overview of the synquery platform technology. Key points include:
- Synquery is a configurable web system platform that uses a small RSD script for system configuration and provides seamless client-server connection with event loops and web sockets.
- It has a NoSQL architecture that refers to a "synchronized" client hash and uses broadcasting to apply changes from others to clients.
- The platform automatically generates forms, tables, and printable reports using technologies like JavaScript, jQuery, HTML5, CSS3, Node.js, and MongoDB.
Semantic Annotation and Search for Resources in the Next Generation Webajithranabahu
1) The document discusses using semantic annotations to improve discovery and integration of web services and resources. It proposes that modifying service descriptions with annotations is the best way to support future service consumption patterns.
2) The Kino project demonstrates annotating biology documents using ontologies and then indexing the annotations to enable faceted search. KinoW generalizes this approach by allowing annotations using schemas like Schema.org to be added via a browser plugin and published back to sites.
3) By annotating service descriptions found on web pages and indexing the annotations, it may be possible to conduct formal service discovery through general search engines and also extract formal descriptions from the human-readable pages.
Hire certified PHP and Open Source developers for creating robust Software & Web Applications built in Technologies such PHP, CakePHP, WordPress, Drupal, Joomla and other CMS and Open Source Technologies.
Measuring web performance (Velocity EU 2011)
9. “I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.”
Rudyard Kipling, The Elephant's Child
(C) SERITI CONSULTING, 2011 08/11/2011 9
12. 4 Key “Raw” Metrics
• Time to First Byte (TTFB)
• Render Start Time
• DOMContentLoaded
• Page (onLoad) Load Time (PLT)
13. What about "Above the Fold" time?
• How long to "render of the static stuff in the viewable area of the page"?
Limitations of AFT:
– Only applicable to a lab setting
– Does not reflect user-perceived latency based on functionality
http://assets.en.oreilly.com/1/event/62/Above%20the%20Fold%20Time_%20Measuring%20Web%20Page%20Performance%20Visually%20Presentation.pdf
16. Apdex(t) = (Satisfied Count + Tolerated Count / 2) / Total Samples
• A number between 0 and 1 that represents "user satisfaction"
• For technical reasons the "Tolerated" threshold is set to four times the "Satisfied" threshold, so if your "Satisfied" threshold (t) was 4 seconds then:
• 0 to 4 seconds = Satisfied
• 4 to 16 seconds = Tolerated
• over 16 seconds = Frustrated
http://apdex.org/
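The Apdex formula above can be sketched in a few lines of JavaScript. The function name and the sample values are illustrative, not from the slides:

```javascript
// Apdex sketch: samples at or under the threshold t are "satisfied",
// between t and 4t "tolerated", and anything over 4t "frustrated".
function apdex(samplesSeconds, t) {
  var satisfied = 0, tolerated = 0;
  samplesSeconds.forEach(function (s) {
    if (s <= t) satisfied++;
    else if (s <= 4 * t) tolerated++;
    // frustrated samples count only towards the total
  });
  return (satisfied + tolerated / 2) / samplesSeconds.length;
}

// With t = 4s: 2s and 3s are satisfied, 10s tolerated, 20s frustrated
// => (2 + 1/2) / 4 = 0.625
```

Note that a single very slow sample and a single mildly slow sample pull the score down by the same half-point, which is the usual criticism of Apdex as a summary metric.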
17. PERFORMANCE IS MULTI-DIMENSIONAL
Multiple Metrics
For Multiple URLs
From Different Locations
Using Different Tools
Across the Lifecycle
Over Time
22. WHERE – DEPENDS ON THE HOW & WHY…
[Diagram: the measurement points along the delivery chain, from user/browser metrics to server-based metrics – web browser or smartphone (WiFi or 3G), "real user" versus synthetic agent, the Internet, firewall / load-balancer, (reverse) proxy server, SPAN port or network tap feeding a network "sniffer", and the web server. The signal/noise ratio increases along the chain.]
23. The Synthetic versus Real-User Debate
24. "…it's a question of when, not if active monitoring of websites for availability and performance will be obsolete."
- Pat Meenan
"Because you're skipping the 'last mile' between the server and the user's browser, you're not seeing how your site actually performs in the real world"
- Josh Bixby
"You can have my active monitoring when you pry it from my cold, dead hands…"
- Steve Thair
http://blog.patrickmeenan.com/2011/05/demise-of-active-website-monitoring.html
http://www.webperformancetoday.com/2011/07/05/web-performance-measurement-island-is-sinking/
http://www.seriticonsulting.com/blog/2011/5/21/you-can-have-my-active-monitoring-when-you-pry-it-from-my-co.html
25. Observational Study versus Experiment
26. Experiment versus Observational Study
• Both typically have the goal of detecting a relationship between the explanatory and response variables.
Experiment
• Create differences in the explanatory variable and examine any resulting changes in the response variable (cause-and-effect conclusion)
Observational Study
• Observe differences in the explanatory variable and notice any related differences in the response variable (association between variables)
http://www.math.utah.edu/~joseph/Chapter_09.pdf
27. Observational Study = Real-User
• "Watching" what happens in a given population sample
• We can only observe… and try to infer what is actually happening
• Many "confounding variables"
• High signal to noise
• Correlation
28. [Diagram: "Context" at the centre, surrounded by the confounding variables – location, bandwidth, latency, connection type (wired, WiFi, 3G), cached objects, operating system, addons & extensions, antivirus, browser, device, time of day, screen resolution.]
29. Observational Study = Real-User versus Experiment = Synthetic
Real-User (Observational Study):
• "Watching" what happens in a given population sample
• We can only observe… and try to infer what is actually happening
• Many "confounding variables"
• High signal to noise
• Correlation
Synthetic (Experiment):
• We "design" our experiment
• We choose when, where, what, how etc
• We control the variables (as much as possible)
• Lower signal to noise
• Causation*
* OK, real "root cause" analysis will probably take a lot more investigation, I admit… but you get closer!
30. So which one is better?
Neither.
Complementary not Competing
"…Ultimately I'd love to see a hybrid model where synthetic tests are triggered based on something detected in the data (slowdown, drop in volume, etc) to validate the issue or collect more data."
- Pat Meenan
31. From Observation… to Experiment
Real-User Monitoring detects a change in a page's performance → API call to a synthetic controlled test (controlling the variables) and compare to baseline → use RUM as a "reality check".
32. Back to the "How"…
Objective "quantitative techniques":
• Javascript timing
• Navigation timing
• Browser extensions
• Custom browsers
• Proxy timings
• Web server mods
• Network sniffing
33. 7 WAYS OF MEASURING WEBPERF
1. JavaScript timing e.g. Souders' Episodes or Yahoo! Boomerang*
2. Navigation-Timing e.g. GA SiteSpeed
3. Browser Extension e.g. HTTPWatch
4. Custom browser e.g. 3pmobile.com or (headless) PhantomJS.org
5. Proxy timing e.g. Charles proxy
6. Web Server Mod e.g. APM solutions
7. Network sniffing e.g. Atomic Labs Pion
34. COMPARING METHODS…
Metric               | JavaScript | Nav-Timing API | Browser Extension | Custom Browser | Proxy Debugger | Web Server Mod | Network Sniffing
Example Product      | WebTuna    | SiteSpeed      | HTTPWatch         | 3PMobile       | Charles Proxy  | APM Modules    | Pion
"Blocked/Wait"       | No         | No             | Yes               | Yes            | Yes            | No             | No
DNS                  | No         | Yes            | Yes               | Yes            | Yes            | No             | No
Connect              | No         | Yes            | Yes               | Yes            | Yes            | No             | Yes
Time to First Byte   | Partially  | Yes            | Yes               | Yes            | Yes            | Yes            | Yes
"Render Start"       | No         | No             | Yes               | Yes            | No             | No             | No
DOMReady             | Partially  | Yes            | Yes               | Yes            | No             | No             | No
"Page/HTTP Complete" | Partially  | Yes            | Yes               | Yes            | Yes            | No             | Partially
OnLoad Event         | Yes        | Yes            | Yes               | Yes            | No             | No             | No
JS Execution Time    | Partially  | No             | Yes               | Yes            | No             | No             | No
Page-Level           | Yes        | Yes            | Yes               | Yes            | Partially      | Partially      | Partially
Object Level         | No         | No             | Yes               | Yes            | Yes            | Yes            | Yes
Good for RUM?        | Yes        | Yes            | Partially         | No             | No             | Partially      | Yes
Good for Mobile?     | Partially  | Partially      | Partially         | Partially      | Partially      | Partially      | Partially
Affects Measurement  | Yes        | No             | Yes               | Yes            | Yes            | Yes            | No
35. JAVASCRIPT TIMING – HOW IT WORKS
unLoad event: var start = new Date().getTime() → stick it in a cookie → load the next page → onLoad event: var end = new Date().getTime() → PLT = end - start → send a beacon: beacon.gif?time=plt
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
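The cookie-and-beacon flow above can be sketched as follows. The cookie name and beacon URL are illustrative, and the browser-only parts are guarded so the pure cookie-parsing helper works anywhere:

```javascript
// Pure helper: pull the start timestamp (ms) left by the previous page's
// unload handler out of a cookie string; null on the first page of a journey.
function readStartCookie(cookieString) {
  var m = cookieString.match(/(?:^|;\s*)perfStart=(\d+)/);
  return m ? Number(m[1]) : null;
}

if (typeof window !== 'undefined') {
  // On unload: stick the current time in a cookie for the NEXT page to read.
  window.addEventListener('unload', function () {
    document.cookie = 'perfStart=' + new Date().getTime() + '; path=/';
  });
  // On load: PLT = end - start, then report it with a 1x1 image beacon.
  window.addEventListener('load', function () {
    var start = readStartCookie(document.cookie);
    if (start === null) return; // first page in the journey: nothing to measure
    var plt = new Date().getTime() - start;
    new Image().src = '/beacon.gif?time=' + plt;
  });
}
```

The guard also makes the slide's "only accurate for the 2nd page" limitation concrete: with no cookie from a previous page, there is simply no start time to subtract.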
36. PROS & CONS OF JAVASCRIPT TIMING
(Example product: WebTuna)
Pros:
• Simple
• Episodes/Boomerang provide custom timing for developer instrumentation
Cons:
• Relies on Javascript and cookies
• Only accurate for the 2nd page in a journey
• Can only really get a "page load metric" and a partial TTFB metric
• "Observer effect" (and Javascript can break!)
37. NAVIGATION-TIMING – HOW IT WORKS
onLoad event: var end = new Date().getTime(); var plt = end - performance.timing.navigationStart; → send a beacon: beacon.gif?time=plt
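A sketch of the same idea using the Navigation Timing API; no cookie is needed because the browser records navigationStart itself. The beacon URL is illustrative, and the browser-only part is guarded so the pure helper can run anywhere:

```javascript
// Pure helper: derive headline metrics from a Navigation Timing record
// (all values are millisecond epoch timestamps).
function navMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    connect: t.connectEnd - t.connectStart,
    ttfb: t.responseStart - t.navigationStart,
    plt: t.loadEventEnd - t.navigationStart
  };
}

if (typeof window !== 'undefined' && window.performance && window.performance.timing) {
  window.addEventListener('load', function () {
    // Defer one tick so loadEventEnd has been filled in.
    setTimeout(function () {
      var m = navMetrics(window.performance.timing);
      new Image().src = '/beacon.gif?time=' + m.plt;
    }, 0);
  });
}
```

Because the timestamps come from the browser rather than from script started mid-page, this captures DNS, connect and TTFB for the first page of a journey too, which the cookie technique cannot.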
39. PROS & CONS OF NAVIGATION-TIMING
(Example product: SiteSpeed)
Pros:
• Even simpler!
• Lots more metrics
• More accurate
Cons:
• Need browser support for the API (IE9+ / Chrome 6+ / Firefox 7+)
• Relies on Javascript (for querying the API & beacon)
• "Observer effect"
• Page-level only
40. A BIT MORE ABOUT GA SITESPEED…
• Just add one line for basic, free, real-user monitoring!
_gaq.push(['_setAccount', 'UA-12345-1']);
_gaq.push(['_trackPageview']);
_gaq.push(['_trackPageLoadTime']);
• Sampling appears to vary (a lot!)
• 10% of page visits by design but reported 2% to 100%
• Falls back to Google Toolbar if available (but NOT javascript timing)
• Will probably make you think perf is better than it really is…
44. BROWSER EXTENSION – HOW IT WORKS
Write a browser extension… that subscribes to a whole lot of API event listeners… get your users to install it… then send the timing back to a collector, e.g. showslow.com
https://developer.mozilla.org/en/XPCOM_Interface_Reference
45. PROS & CONS OF BROWSER EXTENSIONS
(Example product: HTTPWatch)
Pros:
• Very complete metrics
• Object and page level
• No javascript (in the page at least)!!!
• Great for continuous integration perf testing
Cons:
• Getting users to install it…
• Not natively cross-browser
• Some browsers don't support extensions (especially mobile browsers!)
• "Observer effect"
46. CUSTOM BROWSER – HOW IT WORKS
Take some open source browser code (like WebKit or the Android Browser) → add custom instrumentation for performance measurement → get users to install it… → send the timing back to a collector, e.g. 3pmobile.com
47. PROS & CONS OF CUSTOM BROWSER
(Example product: 3PMobile)
Pros:
• Great when you can't use extensions / javascript / cookies, i.e. for mobile performance e.g. 3Pmobile.com
• Great for automation e.g. http://www.PhantomJS.org/
• Good metrics (depending on OS API availability)
Cons:
• Requires installation
• Maintaining fidelity to "real browser" measurements
• "Observer Effect" (due to instrumentation code)
48. PROXY DEBUGGER – HOW IT WORKS
Change the browser to use a debugging proxy (e.g. Charles or Fiddler) → the proxy records each request → export the data to a log
49. PROS & CONS OF PROXY DEBUGGER
(Example product: Fiddler proxy)
Pros:
• One simple change to browser config
• No Javascript / cookies
• Can offer bandwidth throttling
Cons:
• Proxies significantly impact HTTP traffic
(http://insidehttp.blogspot.com/2005/06/using-fiddler-for-performance.html)
• No access to browser events
• Concept of a "page" can be problematic…
50. 6 Keep-Alive connections per SERVER versus 8 Keep-Alive connections TOTAL per PROXY (Firefox 7.0.1)
51. WEB SERVER MOD – HOW IT WORKS
Write a webserver mod or ISAPI filter → start a timer on request → stop the timer on response → send the timing back to a collector, e.g. AppDynamics
http://www.apachetutor.org/dev/request
52. PROS & CONS OF WEB SERVER MOD
(Example product: APM modules)
Pros:
• Great for Application Performance Management (APM)
• Can be used in a "hybrid mode" with Javascript timing
• Measuring your "back-end" performance
• Can be easy to deploy*
Cons:
• Limited metrics, ignores network RTT and only sees origin requests
• "Observer Effect" (~5% server perf hit with APM?)
• Concept of a "page" can be problematic…
• Can be a pain to deploy*
53. NETWORK SNIFFING – HOW IT WORKS
Create a SPAN port or network tap → promiscuous-mode packet sniffing → assemble TCP/IP packets into HTTP requests → assemble HTTP requests into "pages" → record the timing data in a database
54. PROS & CONS OF NETWORK SNIFFING
(Example product: Pion)
Pros:
• No "observer effect" (totally "passive")
• Very common "appliance-based" RUM solution
• Can be used in a "hybrid mode" with Javascript timing
• Can be easy to deploy*
Cons:
• Limited metrics and only sees origin requests
• Not "cloud friendly" at present
• Concept of a "page" can be problematic…
• Can be a pain to deploy*
55. SUMMARY
• Performance is subjective (but we try to make it objective)
• Performance is Multi-dimensional
• Context is critical
• “Observational Studies AND Experiments”
• Real User Monitoring AND Synthetic Monitoring
• 7 different measurement techniques each with Pros & Cons
56. @LDNWEBPERF USER GROUP!
• Join our London Web Performance Meetup
• http://www.meetup.com/London-Web-Performance-Group/
• Next Wednesday 16th Nov - 7pm – London (Bank)
• WPO case study from www.thetimes.co.uk!
• Follow us on Twitter @LDNWebPerf
• #LDNWebPerf & #WebPerf
Good Afternoon. My name is Stephen Thair and I am a freelance webops manager and performance specialist based in London, UK. I am also the organiser for the London Web Performance Meetup community. My topic today is “measuring web performance” and before we drill down into the specifics of measuring web performance I have one piece of bad news… <click>
And not just wrong because of esoteric stuff like the observer effect, or even the accuracy of our measuring tools… it’s wrong because of one major reason… <click>
And that reason is the human brain… The human brain does not have a metronomic clock in it, tick-tocking away to a regular beat like the clocks we use to "measure" web performance… <click>
The key here is "subjective" and "variable" – there is a lot of stuff that the "numbers" won't and CAN'T tell you about how the user perceives the performance of your website. Subjective, because YOUR experience is not MY experience! And when we say that performance is variable, what do we mean? Well, we've all heard about time "slowing down" under the effects of adrenaline in emergencies… so perhaps if we are visiting a website that particularly gets the adrenaline flowing (ahem) our perception of time might "slow down", and what is in reality a "fast" website might appear slow. Conversely, there is a psychological state called "flow" where we can "lose track of time" because we are involved in a task, perhaps playing an online game, where suddenly we find an hour or two has gone past without us being aware of it. But our perception of performance is variable in other ways, too: <click> different for different sites; for different users (age, gender, emotional state ("Is the train about to leave? Am I running late?"), culture, level of experience); at different stages in the user journey (e.g. navigation/browse vs search vs checkout); and on different devices (mobile vs wireless vs wired).
Actual = what your “numbers” say it is…
Expected = what your user wanted it to be… for your website… at this moment in time… which is to say, expectations are not fixed and immutable!
Perceived = how long the user “thought it took”, with their subjective and variable perception of time…
Remembered = what they told their friends down the pub about your crap & slow (or awesome & fast) website!

Stoyan’s talk at Velocity, “The Psychology of Performance”, is highly recommended: http://velocityconf.com/velocity2010/public/schedule/detail/13019
“Satisfaction = perception minus expectation” – David Maister. So… <click>
So we have talked about the <click> “Subjective” nature of web performance, but our challenge as developers, testers and WebOps is to devise ways to make the subjective…. <click> Objective… and measure it! So how can we do that? Well, science has been struggling with this problem of “subjective” versus “objective” for centuries and has developed different techniques to apply to each… <click>
To look at subjective data we use <click> Qualitative Techniques, which are commonly used in the Social Sciences…. <click> case studies, focus groups, interviews <click> etc. If some of these sound familiar, that’s because many of these are the kinds of tools that people from the User Experience world use in their UX labs… And it’s worth making the point that you can start “measuring performance” very early in the software development lifecycle, even with paper-based or simple static HTML click models, by seeing how long people take to choose, decide and navigate… or even simply how many “clicks” they take to achieve a given task (fewer clicks = “faster performance”). And that’s all I am going to say on the qualitative side of things, because as web professionals we generally prefer to look at “objective” measures… <click>
And “objectively” normally means quantitatively – meaning we can use NUMBERS…. and bring our statistical tools to bear… And there are 7 techniques for “HOW” to measure website performance. But before we dive into the “HOW” we need to talk about some other things <click>
And those other things are what Rudyard Kipling called his “honest serving-men” – the what, why, when etc. So firstly, in terms of “what” do you want to measure, do you care about <click> “objects”
About “objects”… or pages… or the entire user journey? Even if we are talking about pages we have multiple metrics to choose from… <click>
At a page and object level you have multiple metrics you can choose from… but I generally find that there are 4 I really care about…
So what are they? <click>
TTFB – how fast is my back-end responding?
RenderStart – when does the user start to get visual feedback from the page? Remember, it’s about perception… but it has got to be meaningful, e.g. not just a CSS background change.
DOMContentLoaded – how soon can my developers start hooking up their fancy Javascript stuff to the DOM?
Page Load (onLoad) – when have all the elements on the page been loaded (and I can start all my deferred resource loading via Ajax!)?
One “new” metric you might have heard about is “above the fold time”… <click>
AFT is basically designed to be a “render complete” timing, or at least a “render of the static stuff in the viewable area of the page”. AFT is a nice idea… but its implementation is troublesome at present – it takes around 4 minutes to calculate. For most sites AFT ≈ PLT… but according to Pat from Webpagetest he has seen it range from ½ PLT to 2x PLT… Personally, I really like using the screen-capture videos for this, and looking at them in comparison to previous versions, competitors etc…
This sort of video comparison that you can create with webpagetest.org… but when you rely on human judgement you are back into the subjective, again…So what other metrics might we be interested in? <click>
We start with the raw metrics… then move up into counts (which we normally show as histograms), into the statistical measures, and finally into artificial summary metrics like Apdex (which I will explain in a second). All of this data can be sliced and diced in your data warehouse… but keep in mind that you can easily run into gigabytes and terabytes of data for a high-volume website in a month… so plan carefully! Ok, so back to Apdex – what is Apdex? <click>
• Raw metrics – connection time, render start time, page load time, “above the fold” time etc.
• Counts/histograms
• Statistical metrics – mean, mode, median, standard deviation
• Apdex – calculated “summary” metrics
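To make that hierarchy concrete, here is a minimal sketch (plain JavaScript, with a made-up bucket width) of the first step up the ladder – turning raw page load times into histogram counts:

```javascript
// Bucket raw load times (in ms) into fixed-width histogram bins.
// bucketMs is the bin width; each key is the bin's lower edge.
function histogram(loadTimesMs, bucketMs) {
  var bins = {};
  loadTimesMs.forEach(function (t) {
    var edge = Math.floor(t / bucketMs) * bucketMs;
    bins[edge] = (bins[edge] || 0) + 1;
  });
  return bins;
}
```

The statistical measures (mean, median etc.) are then computed per bin or per URL from the same raw samples.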
Apdex is simple – split the page load time for every visit to your website into 3 buckets:
• Satisfied
• Tolerated
• Frustrated
And you get a score from 0 – 1 that represents an “overall” measure of your site’s performance, across all URLs, across all the visits during the time period. So why do we need a “single number” metric like Apdex? <click>
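The bucketing can be sketched in a few lines; this assumes the standard Apdex convention that “tolerated” means anything up to four times the target threshold T:

```javascript
// Apdex: bucket each page load time against a target threshold T (ms).
// Satisfied: t <= T; Tolerated: T < t <= 4T; Frustrated: t > 4T.
// Score = (satisfied + tolerated / 2) / total samples, giving 0..1.
function apdex(loadTimesMs, targetMs) {
  var satisfied = 0, tolerated = 0, total = loadTimesMs.length;
  loadTimesMs.forEach(function (t) {
    if (t <= targetMs) satisfied++;
    else if (t <= 4 * targetMs) tolerated++;
    // anything slower counts as frustrated and contributes nothing
  });
  return total ? (satisfied + tolerated / 2) / total : 0;
}

// e.g. apdex([400, 900, 2500, 9000], 1000) -> 0.625
```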
Because web performance is multi-dimensional….
• multiple metrics
• for multiple URLs
• from different (measurement) locations
• using different tools
• across the (software) lifecycle
• over time
And it gives you a great number to stick on the plasma screen in the Ops area, and a nice number to stick on your weekly report to your boss… But beyond just these metrics on how long a page took to load, there is something else we need to record… <click> and that is “context”…
“Context” is the metadata about the “numbers” we have collected. It is the key to EXPLAINING why the performance number recorded is “good or bad”…
<click> “context” is the metadata about the measurement you made… what browser, from what geographical location, over what type of network etc etc. Without context your performance data is meaningless…
Context helps us answer this question!!! We can see that the mode of the page load time is about 0.9sec but what about this cluster out at around 2.7 and 2.9 secs? Maybe they are from a group of customers in one location, or using an older browser etc… But back to our 6 honest serving men and let’s look at who and when… <click>
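A sketch of that kind of slicing, assuming each sample carries its context as plain fields (the field names here are made up for illustration):

```javascript
// Group load times by a context field and compute the mean per segment,
// to see whether a slow cluster maps onto one browser, location etc.
function meanByContext(samples, field) {
  var groups = {};
  samples.forEach(function (s) {
    (groups[s[field]] = groups[s[field]] || []).push(s.loadTime);
  });
  var means = {};
  Object.keys(groups).forEach(function (k) {
    var sum = groups[k].reduce(function (a, b) { return a + b; }, 0);
    means[k] = sum / groups[k].length;
  });
  return means;
}
```

If the 2.7–2.9 sec cluster all shares one segment value, you have your explanation.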
We want to measure performance across the lifecycle (SDLC) and different teams will need to use different techniques to get the different metrics they need… We’ll talk more about this as we go through each technique…
Where you choose to measure your web performance depends on your objectives… what exactly are you trying to measure, at what stage in the lifecycle, synthetic or real-user? The further we move away from the origin server, the more network latency begins to dominate… and the more contextual factors come into play… and hence <click> the signal-to-noise ratio drops… But why have I drawn a distinction between “real users” and “synthetic agents” like monitoring or performance test agents? <click>
Well, because there is quite a debate raging out there at present on the future of web performance measurement…
Synthetic = the active monitoring from Site Confidence / Keynote / Gomez / Pingdom we all know and love…
Real-User = measuring the performance of real user visitors to your website using tools like Atomic Labs Pion, Triometric, Coradiant, Tealeaf etc.
A lot of people have strong opinions about whether we should be measuring “real-user” performance, or whether we should be synthetically making requests/transactions to test our website, regardless of whether those “synthetic requests” come from real browsers or browser-emulating agents. My view is that people who say either/or are missing the SCIENCE behind the two different techniques… and we can look to the scientific method to help us conceptualise the difference between the two.
And science talks about two different quantitative techniques for gaining knowledge about the world, or in this case, web performance… and that is the “Observational Study” versus “Experiment” <click>
Both seek to detect a relationship – “what is making this page load slowly?”
Create the difference… keeping everything else the same… controlling the experimental factors (as much as possible).
For example, what happens when I measure with a different browser… but keep everything else the same?
Observational Studies = Real-User monitoring. We can only measure what occurs NATURALLY in the sample population. If no one visits that URL for a while, how will we know that it’s broken or slow? So what do I mean by “confounding variables”? Well, what I mean is <click> Context!
In “real-user” performance measurement the USERS define the context… all of the variables that might affect the number that you measure. So… how can I get some control back and reduce the number of confounding variables? Run an Experiment! <click>
<click> Experiment = synthetic testing, where WE request the page we measure… <click> and hence we get to design our “experiment”. <click> We choose what to measure, from what location, over a fixed bandwidth, using a known agent/browser, with a known frequency (which means a stable sample size, which is important for statistics when comparing means etc. from different URLs) <click> as we seek to control the confounding variables (as much as we can), so we get <click> a higher signal-to-noise ratio and hopefully get better at understanding the “root cause”… <click> I said “hopefully”… <click> So which one is “better” – RUM or Synthetic, Observational Study or Experiment? <click>
So which one is better? <click> It depends on what you are trying to achieve… what’s your role… what’s your goal. Personally, if I am going to be woken up at 3am with an alert saying there is a problem with my website, I’d like to have a higher degree of confidence in the alert than just because some ISP is having problems and giving their users a slow connection… I want to be alerted about problems I can DO SOMETHING ABOUT, and as an Ops Mgr I will “design my experiments” accordingly… But what about a “hybrid model” <click> where we move from RUM to Synthetic?... <click>
So how would Pat’s idea work? <click> Use RUM to detect changes out there “in the real world”… <click> then pass the URL to test via an API <click> to try and narrow down the signal/noise (note we might be calling an entire SET of regression tests here)… The goal is <click> to move from Observation <click> by controlling the variables <click> to a well-defined Experiment. But don’t forget you can also go the other way… to make sure that your “experiment” even vaguely reflects “reality” by cross-checking your synthetic results with what’s out there in the real world… which is exactly what any scientist does when they create an experimental model – they make sure that it correlates with reality! Ok, so we’ve covered off the who, what, when, where etc., let’s get back to the “HOW”… <click>
Which is not to say that we can’t measure subjective things… qualitatively…
There are basically 7 techniques used to measure web performance. Each one has its pros and cons… ease of use, what it can measure, cost etc. <click> x 7 So which technique is best? It depends on what you want to measure, where etc… comparing them all together we get <click>
So let’s look at each one in turn and how it works (in a very simplified way!)…
For example, the following JavaScript shows a naive attempt to measure the time it takes to fully load a page:

<html>
<head>
<script type="text/javascript">
var start = new Date().getTime();
function onLoad() {
  var now = new Date().getTime();
  var latency = now - start;
  alert("page loading time: " + latency);
}
</script>
</head>
<body onload="onLoad()">
<!-- Main page body goes here. -->
</body>
</html>
You can do custom page instrumentation by wrapping critical sections of the page in start/stop timers. But it relies on Javascript and cookies… which might be disabled or not available (especially on mobile). Only accurate from the 2nd page onwards.
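The “2nd page” trick works by stamping the departure time into a cookie on the previous page. A minimal sketch, with the cookie parsing pulled out into a plain function and a hypothetical beacon URL for the collector:

```javascript
// Pull the stored navigation-start timestamp back out of document.cookie.
function readNavStart(cookieString) {
  var m = cookieString.match(/(?:^|;\s*)navStart=(\d+)/);
  return m ? parseInt(m[1], 10) : null;
}

// Browser wiring (guarded so the sketch also loads outside a browser):
if (typeof window !== 'undefined') {
  // Page A: stamp the navigation start just before we leave.
  window.addEventListener('beforeunload', function () {
    document.cookie = 'navStart=' + Date.now() + '; path=/';
  });
  // Page B: on load, recover the stamp and report the elapsed time.
  window.addEventListener('load', function () {
    var start = readNavStart(document.cookie);
    if (start !== null) {
      // '/beacon' is a hypothetical collector endpoint
      new Image().src = '/beacon?t=' + (Date.now() - start);
    }
  });
}
```

Which is exactly why it fails with cookies disabled, and gives you nothing for the very first page of a visit.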
THE BROWSER is doing most of the timing for us… Brilliant!!! No more OnBeforeUnLoad event! It all occurs “after” the page has loaded…No more cookies… lots more metrics <click>
Many more metrics in the Navigation-Timing spec… at a PAGE level, at least…
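A sketch of pulling the headline metrics out of the timing object – the attribute names come from the W3C Navigation Timing spec, but the derived-metric names are mine:

```javascript
// Derive page-level metrics from a Navigation-Timing object.
// In a supporting browser you would pass window.performance.timing.
function deriveMetrics(t) {
  return {
    ttfb: t.responseStart - t.navigationStart,                        // back-end speed
    domContentLoaded: t.domContentLoadedEventStart - t.navigationStart,
    pageLoad: t.loadEventStart - t.navigationStart
  };
}
```

Because the browser records these timestamps itself, there is no OnBeforeUnload hack and no cookie: all the data is available after the page has loaded.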
Biggest pro is that the TIMING is mostly done by the browser… so it’s less intrusive and more accurate with a much better set of metrics… CON – browser support…A bit more about “SiteSpeed” <click>
Free Navigation-Timing Based real-user performance monitoring…Also uses timings from the Google Toolbar… which leads us nicely into the next technique <click>
Filtered to remove all measurements > 60 and samples > 2. Scale on the left is 3.5 second intervals
Here is your histogram, turned on its side. 0-1 (23%), 1-3 (45%), 3-7 (22.
Excellent metrics including object level metrics… so you can get that nice waterfall diagram we know and love!
Basically you are sticking a recording mechanism, a proxy debugger like Charles or Fiddler, between you and the origin web server… and that proxy will record all your requests and the timings associated with them…
YOU ARE NO LONGER IN THE CLIENT… so no RenderStart, no OnLoad event, and hence the concept of a “page” gets fuzzy… particularly with AJAXy pages…

How does the proxy affect your traffic? Probably the biggest potential issue is how the proxy server connects to the origin server. There is no guarantee that it’s going to use the same number of connections, or re-use them in the same way that your browser will…

From 2005, EricLaw – http://insidehttp.blogspot.com/2005/06/using-fiddler-for-performance.html

“In Fiddler 0.9 and below, Fiddler never reuses sockets for anything, which may dramatically affect the performance of your site. Fiddler 0.9.9 (the latest beta) offers server-socket reuse, so the connection from Fiddler to the server is reused. Note that the socket between your browser and Fiddler is not reused, but since this is a socket->socket connection on the same machine, there's not a significant performance hit for abandoning this socket.

So, Fiddler isn't suitable for timing. But this doesn't impact your ability to check compression, conditional requests, Expires headers, bytes-transferred, etc. Other than the actual timings, the browser does not behave much differently with Fiddler than without (and chances are good that your visitors are using some type of proxy). The browser will often send Proxy-Connection: Keep-Alive; this isn't sent without a proxy. IE will send Pragma: no-cache if the user hits F5 or clicks the refresh button; without a proxy, you have to hit CTRL+F5 to send the No-Cache value. The fact that a client-socket is abandoned can lead to extra authentication roundtrips when using the NTLM connection-based authentication protocol.”
Write a mod or filter… that can see every request… start/stop timers… send them to a collector…
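A sketch of that pattern in Node, standing in for a web-server mod/filter – `recordTiming` is a placeholder for whatever collector you send the measurements to:

```javascript
// Wrap a request handler so every request is timed from arrival until
// the response is flushed, then hand the measurement to a collector.
function timingFilter(handler, recordTiming) {
  return function (req, res) {
    var start = Date.now();
    res.on('finish', function () {            // fires once the response is sent
      recordTiming(req.url, Date.now() - start);
    });
    handler(req, res);
  };
}

// Usage (hypothetical handler/collector names):
//   http.createServer(timingFilter(myHandler, myCollector)).listen(8080);
```

This is the same start/stop-timer idea as an ISAPI filter or Apache mod, just expressed as a wrapper function.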
Web server mods/ISAPI filters are how most of the APM solutions work. AppDynamics, Dynatrace and New Relic are all in this space, and some of them have implemented the Javascript timing as well… Great for measuring the performance of your web tier and back-end… but not that useful for measuring page-level performance unless you go the hybrid approach.
SPAN port then sniff the traffic…. Re-assemble the packets then the requests then the “page” then record the data…
The network sniffing approach is really the only truly “passive” technology out there, i.e. one that doesn’t have any “observer effect” on the measurement. Pion, Coradiant, Tealeaf, Triometric. It is not cloud-friendly, though, since EC2 doesn’t allow promiscuous-mode appliances…
So in summary…. <click x 6> And before we go, a quick plug for my user group… <click>
A great WPO case study next week from “The Times” newspaper… and then in December we have a special Xmas event hosted by Betfair!