Games always have too much stuff to render, but are often set in occluded environments, where much of what is in the view frustum is not visible to the camera. Occlusion culling tries to avoid rendering objects which the player can't see.
Discussion of occlusion for games has tended to focus on the use of GPU pixel counters to determine whether or not an object is visible. This places additional stress on a limited resource, and has latency which has to be worked around, requiring a more complex pipeline.
For KILLZONE 3 we took a different approach - we use the SPUs to render a conservative depth buffer and perform queries against it. This allows us to cull objects very early in the frame, avoiding any pipeline costs for invisible objects.
This presentation talks about the ideas (and dead ends) we explored along the way, as well as explaining in detail what we ended up with.
1. Preparation of percent solution and calculation.
2. Preparation of aromatic water.
3. Preparation of syrups.
a) Phenobarbitone-Na syrup.
b) Chlorpheniramine maleate syrup.
c) Promethazine-HCl syrup.
d) Iron syrup.
4. Preparation of suspensions
a) Paracetamol suspension
b) Antacid suspension
c) Chalk powder suspension
5. Preparation of emulsion and identification of type of emulsion
a) Primary emulsion by dry gum method and wet gum method
b) Castor oil emulsion
Structural features of Cinchona alkaloids
1- The basic skeleton of Cinchona alkaloids is ruban-9-ol.
2- The ruban nucleus is a combined skeleton formed from a quinoline ring attached to a quinuclidine ring (a bicyclic ring containing N) through a methylene group.
Walkthrough of the key audio technologies used in Frostbite: HDR Audio, the Master Unit, and the modular design. An in-depth sound design discussion for Battlefield: Bad Company covering both the sandboxed multiplayer and the story-driven single-player.
Adventures in Observability - ClickHouse and Instana - Marcel Birkner
Monitoring is a hot topic for ClickHouse users. Joann Buch and Marcel Birkner of Instana and Robert Hodges of Altinity discuss how to create a complete monitoring solution for ClickHouse using Instana. We'll start with an overview of the Instana product, followed by a review of important metrics you should track on ClickHouse. We'll then walk through how to set up Instana on ClickHouse. We'll finish with a discussion of how Instana itself uses ClickHouse internally.
Prometheus - Intro, CNCF, TSDB, PromQL, Grafana - Sridhar Kumar N
https://www.youtube.com/playlist?list=PLAiEy9H6ItrKC5PbH7KiELiSEIKv3tuov
-What is Prometheus?
-Difference Between Nagios vs Prometheus
-Architecture
-Alertmanager
-Time series DB
-PromQL (Prometheus Query Language)
-Live Demo
-Grafana
Things You MUST Know Before Deploying OpenStack: Bruno Lago, Catalyst IT - OpenStack
Audience: Advanced
About: Real world lessons and war stories about Catalyst IT’s experience in rolling out an OpenStack based public cloud in New Zealand.
This presentation will provide tips and advice that may save you a lot of time, money and nights of sleep if you are planning to run OpenStack in the future. It may also bring some insights to people that are already running OpenStack in production.
Topics covered will include: selection of hardware for optimal costs, techniques that drive quality and service levels up, common deployment mistakes, in place upgrades, how to identify the maturity level of each project and decide what is ready for production, and much more!
Speaker Bio: Bruno Lago – Entrepreneur, Catalyst IT Limited
Bruno Lago is a solutions architect that has been involved with the Catalyst Cloud (New Zealand’s first public cloud based on OpenStack) from its inception. He is passionate about open source software, cloud computing and disruptive technologies.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
Dark launching with Consul at Hootsuite - Bill Monkman - Ambassador Labs
Dark Launching (A.K.A. Feature Flagging) is a technique and mindset that has truly shaped the way we write, test, and deploy code at Hootsuite. It gives our team realtime, fine-grained control over our production systems which helps to prevent issues from reaching users, and build developer confidence in a culture of pushing code many times per day.
In this presentation I will go over how the system helps us both in the context of microservices and monoliths, and how we made use of Consul, Hashicorp's HA service discovery / KV store, to make it more resilient and performant at scale.
As one of our primary data stores, we utilize MongoDB heavily. Early last year our DevOps lead, Chris Merz, submitted some of our use cases to 10gen (http://www.10gen.com/events) as fodder for a presentation at the MongoDB conference in Boulder. The presentation went well enough at the Boulder conference that 10gen asked him to give it again at San Francisco, Seattle and again in Boulder.
Hopefully there are some nuggets in this deck that can help you in your quest to dominate MongoDB.
Proactive monitoring tools or services - Open Source B.A.
Part 1: (Open source) monitoring tools in all shapes and sizes [18:00 to 19:30]
In this session, Jan Guldentops draws on his 20 years of experience to explain what a monitoring solution should, in theory, be capable of, where you can apply it, and what to look out for when selecting a monitoring solution.
We go through the various solutions on the market (open source, closed source, hosted services, etc.). We then take a closer look at the open source Nagios solution and how we at BA have integrated it into our own monitoring system. Finally, we give a short demo of this monitoring system in a number of different environments and show how far you can go in tailoring the solution to your needs.
Ninth episode of the MuleSoft Meetup in Milan. We talk with Paolo Petronzi about automation and CI/CD, and then with Luca Bonaldo, our MuleSoft Mentor in Italy, about best practices for batch processing.
Nagios Conference 2014 - Frank Pantaleo - Nagios Monitoring of Netezza Databases - Nagios
Frank Pantaleo's presentation on Nagios Monitoring of Netezza Databases.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
Near real-time anomaly detection at Lyft - markgrover
Near real-time anomaly detection at Lyft, by Mark Grover and Thomas Weise at Strata NY 2018.
https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/69155
Scaling FreeSWITCH to high cps and number of concurrent calls.
You'll learn about how the FreeSWITCH internals work and how to tweak them to improve different call scenarios. You'll learn about OS and environment changes that can help to remove bottlenecks and ensure audio quality.
OSMC 2014 | Naemon 1, 2, 3, N by Andreas Ericsson - NETWAYS
How should monitoring be automated without compromising accuracy?
This talk presents a production-ready system that lets the system administrator configure servers which are automagically picked up by Naemon, while at the same time allowing them to tune their settings without needing access to the monitoring system. Remarkably, this works without even requiring a restart or reload of the monitoring system.
I will also show a (hopefully) working demo of dynamic thresholds in Naemon that draw on various parameters from a request/response system.
IBM MQ - Better application performance - Mark Taylor, IBM
Presented in Feb 2015 at Interconnect
This presentation is aimed at helping application developers understand how to best use MQ features for higher performance.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features that provide convenience and capability sacrifice security. This best practices guide outlines steps users can take to better protect personal devices and information.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
SAP Sapphire 2024 - ASUG301 Building Better Apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
3. Who we are
Since 2004, Loway is leading the way in development of advanced software solutions for the Asterisk® PBX.
With QueueMetrics we set up modern standards for call center performance measurement.
Our mission is to put Swiss passion for precision and reliability at customers' service.
Together with Loway, you can provide your clients with the most reliable, flexible, and sophisticated call-center management solutions available today.
4. What we do
Our History:
● Started working with Asterisk in 2003
● Developed QueueMetrics in 2005
● Developed WombatDialer in 2012
● Launched QueueMetrics Live in 2015
Installed base:
QueueMetrics currently deployed in thousands of call-centers worldwide
Average site: ~50 agents
Largest sites: ~1000 agents live (on Asterisk clusters)
WombatDialer deployed in ~300 sites
Average site: ~80 channels
Largest site: ~4000 channels
Client base:
● 30% USA / Canada
● 25% Europe
● 20% LATAM
● 10% Africa
● 10% Asia
● 5% Middle East
7. The key process to improve run-time performance is:
Monitoring → Assessing → Understanding → Fixing.
●Monitoring builds a baseline of data
●Assessing lets you define how the problem appears
●Understanding – that's the hard part!
●Fixing is often trivial.
●Simply throwing better hardware at the problem will not usually help in the long run.
●Don't wait for problems to start monitoring!
Monitoring & Performance
8. QueueMetrics was designed to be run in call-centers with up to 1,000 live agents.
Most performance problems in mid-sized CCs (up to 150 agents) are caused by:
●Java memory (mis)configuration
●QM data caching turned off
●Broken MySQL indexes
●Wrong audio storage model
●JSON or XML-RPC wallboards
Performance problems?
9. Monitoring – Java Memory
QueueMetrics is written in Java.
●Java uses a fixed memory pool that you need to configure.
●Target: no stop-the-world major collections. That's where the system appears to hang.
●The RPM defaults are NOT okay for larger systems
●Sensible defaults:
-Xms4096M -Xmx4096M -server -XX:+UseParallelOldGC -XX:PermSize=512M -XX:MaxPermSize=512M
●But you need to monitor them!
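A minimal sketch of how these flags could be persisted, assuming a Tomcat-style install where a setenv.sh is sourced at startup — the path and file layout here are illustrative assumptions, not the QueueMetrics RPM's actual layout:

```shell
# Illustrative only: write the sizing flags from the slide into a setenv.sh
# that a Tomcat-style startup script would source (paths are assumptions).
mkdir -p catalina-demo/bin
cat > catalina-demo/bin/setenv.sh <<'EOF'
# Fixed heap (Xms = Xmx avoids resize pauses) + parallel old-gen collector
CATALINA_OPTS="-Xms4096M -Xmx4096M -server -XX:+UseParallelOldGC -XX:PermSize=512M -XX:MaxPermSize=512M"
export CATALINA_OPTS
EOF
grep Xmx catalina-demo/bin/setenv.sh
```

After a restart, the running heap settings should be confirmed with jVisualVM or jstat rather than assumed from the file.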
10. It is very easy to remotely monitor a production JVM.
Monitoring - jVisualVM
●Check heap size and parameters
●Monitor GC activity vs CPU usage
●Monitor threads:
–1 thread ~ 1 active request
–>5 users per thread
●Normal for memory to go up until GC'd
●Thread dump: what is QM doing now?
–Standard QM scripts let you take thread dumps
●CPU Sampler: why is it taking so long?
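For jVisualVM to attach from another machine, the JVM has to expose JMX. The flags below are the stock JDK system properties for that; the port number is arbitrary, and authentication/SSL are disabled here purely for illustration:

```shell
# Stock JDK JMX flags for remote VisualVM attachment. Port 9010 is an example;
# never disable authentication/SSL outside a trusted network.
JMX_OPTS="-Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"
echo "$JMX_OPTS"
```

These would be appended to the same CATALINA_OPTS/JAVA_OPTS as the memory flags, then VisualVM connects via "Add JMX Connection" to host:9010.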
11. Internal tools: Admin page → System diagnostic tools
Monitoring – QM internals
RAM caches
–Data for all partitions
–SQL cache efficiency > 90%
–Cached strings < 1M
Live inspector
–See data being loaded
–Recent activity
–Quite expensive to run
12. Sometimes we find random MySQL issues.
●Symptoms
–Performance degrades strongly and all of a sudden
–Very high MySQL CPU usage and light Java CPU usage
–Lots of open threads
–Very high disk I/O on MySQL server
●Root cause
–Access indexes on tables present but broken / unused.
●What to do:
–Manual reindexing (drop indexes and recreate)
–Might take a while on very large databases
–..assuming I/O on the server is adequate (SSD!)
Monitoring – MySQL issues
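The manual reindexing step might look like the sketch below. The table, index, and column names are hypothetical (inspect your actual schema with SHOW INDEX first), so treat this as a template rather than real QueueMetrics DDL:

```shell
# Template only: table, index and column names below are hypothetical.
cat > reindex.sql <<'EOF'
-- inspect what exists (and what the optimizer could be using)
SHOW INDEX FROM queue_log;
-- drop and recreate a suspect index; can take a while on large tables
ALTER TABLE queue_log DROP INDEX idx_time_queue;
ALTER TABLE queue_log ADD INDEX idx_time_queue (time_id, queue);
EOF
# then feed it to the client: mysql queuemetrics < reindex.sql
cat reindex.sql
```

As the slide notes, rebuild time is dominated by disk I/O, so schedule this off-peak on spinning disks.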
13. If your audio recordings are all stored in a single folder, scan times can get high...
●Symptoms
–QueueMetrics very slow when opening the call pop-up window
●Root cause:
–Improper file storage
●Solution
–Split your recordings into multiple folders, per day and queue, e.g. /mount/audio/2016-01/30/queue1/….
–Use LocalFilesPerDay as the audio playback PM
You can also have multiple locations scanned in sequence, e.g. local first, NAS next.
Monitoring – recordings
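The recommended layout can be sketched with plain mkdir; the per-day/per-queue structure mirrors the slide's /mount/audio example, but is created under a local demo directory here:

```shell
# Demo of the per-day / per-queue recording layout from the slide,
# created under a local directory instead of /mount/audio.
base=audio-demo
for day in 2016-01/30 2016-01/31; do
  for q in queue1 queue2; do
    mkdir -p "$base/$day/$q"
  done
done
find "$base" -type d | sort
```

Keeping each leaf directory to a day's worth of one queue's calls keeps per-folder file counts, and therefore scan times, bounded.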
14. If you use a remote wallboard or any other piece of integration through JSON or XML-RPC webservices, it should be well behaved.
●Symptoms
–Overall performance degradation
–VisualVM shows tons of open threads.
●Root cause
–Too many requests sent per unit of time
●Solution
–Services should cache local results if they need to feed multiple consumers / wallboards at once
–Services must wait for an answer before submitting another – if not, it's sure „Death by a Thousand Threads“.
Monitoring – Services
15. QueueMetrics offers a plain service to be used by monitoring apps:
http://my.qm:8080/queuemetrics/sysup.jsp
●Lightweight JSON response
●Checks memory and DB connectivity
●Easy to integrate with existing monitoring platforms (Influx, Riemann, Zabbix… you name it)
Monitoring – Sysup
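A check against that endpoint can be as small as a curl wrapper. The host name below is the slide's placeholder, so point it at your own instance:

```shell
# Tiny health probe for the sysup endpoint (host is the slide's placeholder).
# Exit status 0 = instance answered; non-zero = down or unreachable.
qm_sysup_check() {
  curl -fsS --max-time 5 "http://my.qm:8080/queuemetrics/sysup.jsp" >/dev/null
}
# usage: qm_sysup_check && echo "QM up" || echo "QM down"
```

The exit-status convention makes it drop straight into Nagios-style check scripts or a cron job that pushes results to Influx or Zabbix.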
16. We run hundreds of QueueMetrics instances in our service QueueMetrics Live.
Monitoring – what we do
●Chose very basic metrics:
–total memory,
–free memory,
–web hits
–loader hits
–errors
●Easily spot 95% of issues
●Manually instrument if needed
18. QueueMetrics is built around an all-encompassing security model based on keys:
●System keys (access QM functions and pages)
●Custom keys
Users have a keyring:
–Default keyring = security class
–Can add and revoke keys for each user
–Masterkey
●All functionalities have a required system key
●All objects have optional keys ( = locks)
–Blank key = visible to everybody
The security model
19. Create different classes for different kinds of users
●Different clients
●Different access roles
●Start with the default ones (AGENT, USER, etc.)
Edit keys with the new editor:
●Plain-English view
●Set individual keys
●View inherited keys
Do not use the master key
Security best practices
20. Configuration
All configuration stored in the system folder.
●You can see which one is used in the License page
A text file holds the current configuration keys:
●File: configuration.properties
●General defaults
●General settings
●Picked up on user log-in
A different configuration file holds licensing information:
●File: tpf.properties
●Picked up on system restart
21. Config. best practices
Keep a backup of your current configuration!
Edit the configuration.properties file through Home → Edit system parameters.
Edit common properties through the Explorer GUI: Home → Explore system parameters
View the current configuration through Home → System Diagnostic Tools → View Configuration.
22. QueueMetrics was born to analyze inbound queues...
●In order to analyze outbound, we used special „outbound queues“ that used a piece of dialplan to do the tracking
●When you do outbound in a call-center context, you need a „reason“ to distinguish calls
●You can originate outbound calls through phones or through the GUI.
Outbound calls in QM
23. For best results...
●Use an underlying physical queue for outbound.
–Presence
–Pauses
–Hotdesking
–No calls!
●Name the queue „q-XXX“ where „XXX“ is the inbound queue
●Use the Icon page to originate, not phones
●If you need to place a lot of calls, use a dialer solution that works with QM (e.g. WombatDialer)
Outbound – best practices
24. Agent configuration
Agents are used by QueueMetrics to map names and properties to a PBX code. An agent has...
●An agent code
●A name
●A supervisor
●A group
●A set of queues
●Etc...
Users are used by QueueMetrics to allow
access to the GUI.
A user has...
●A login
●A password
●Belongs to a class
●Has a set of security keys
●Etc...
What is the difference between an Agent and a User?
In order to let an agent use the GUI, its Login must match the Agent code (e.g. Agent/10).
The QueueMetrics licensing model is based on Agents, not Users.
25. Agents – best practices
For best results...
●Use simple SIP phones
–Configure max lines = 1
–Check agent state on queues
●If using Local channels, avoid excess dialplan
●Use hot-desking
●Use the Icon page
–Easy log-on multiple Q's
–Easy status tracking
–Agent messages
–Outcomes
27. Asterisk logs the basic events for queues. QueueMetrics improves on this, by tracking:
●Start of call (as opposed to queueing of call)
●DNIS
●IVR traversal and menus
●IVR goals
●Music on Hold
As they are not provided by Asterisk, they need to be added at the dialplan level by adding INFO records.
Tracking more information
28. By producing a log entry like the following:
1353461650|1353461627.33271|NONE|NONE|INFO|IVRSTART|1234|5556777
You get:
●Actual start of call
●DNIS
●Call visible on RT before it's queued
Tracking DNIS
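The sample record is plain pipe-separated text, so downstream tooling can pull the extra fields out trivially; the field positions below are read off the example line itself:

```shell
# Split the sample log entry from the slide; positions taken from the example:
# field 1 = actual start timestamp, 6 = verb (IVRSTART), 8 = DNIS.
line='1353461650|1353461627.33271|NONE|NONE|INFO|IVRSTART|1234|5556777'
start_ts=$(echo "$line" | cut -d'|' -f1)
verb=$(echo "$line" | cut -d'|' -f6)
dnis=$(echo "$line" | cut -d'|' -f8)
echo "verb=$verb start=$start_ts dnis=$dnis"
```

The same split works for bulk analysis, e.g. feeding awk or a spreadsheet to count calls per DNIS.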
29. QueueMetrics is able to track the full life cycle of a call.
You need to:
●Send events when keys are pressed
●Send events when goals are reached
●Queueing is automatically considered a goal
Tracking IVRs
30. How often were your IVR
menus traversed?
When and where did people
stop?
How many self-service goals
were reached?
Tracking IVRs: Reports
How long did it take to
traverse each menu?
31. For many call-centers, measuring Music-on-Hold on answered calls is
important:
●Directly affects the perceived quality of service
●Anomalous MOH durations are an "alarm bell" for underlying
problems
●QueueMetrics fully supports it
–Specific reports
–Call details
So… where is the catch?
●Asterisk does not produce these events for queues
Tracking Music-on-Hold
32. So how do we do it?
●Patches available for 1.4 & 1.8
–Never accepted into Asterisk
–Used in production in a number of 100+ channel CCs for years
–Hard to upgrade
●Passive monitoring daemon - MohTracker
–Uses AMI and works with any version of Asterisk
–Available as alpha software, free of charge
–Contact us if interested
Tracking MOH (2)
35. Quality Matters!
●Track calls
–Find issues before they find you.
●Track agents
–Improve training
–Assess strengths and weaknesses
●Track queues
–Are we doing what we are expected to be doing?
–Can we show it?
If you are not using QA now, you are losing out.
QA – Introduction
36. Each call is graded on a set of metrics you define
–Each metric has its own Engagement Code
–Each maps to a numeric score (0-100)
–The same metric can be used in multiple forms
Metrics are grouped into Forms
–Up to 10 sections with up to 130 questions
–Metrics are grouped into four grade bands (Issue, Req.Impr, Meets exp.,
Exceeds exp.)
–For each form, a score is computed
–Forms are immutable (more or less)
–Each call can be graded on multiple forms
QA – General ideas
37. ●Form input is flexible
–Questions can be hidden or shown
–Questions can be weighted in order to form a score
–Shortcut questions
●Immediate agent feedback
–Receive a task when a call is scored
–Agents are immediately engaged
–Agents must acknowledge or dispute it
QA – General ideas (2)
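The weighted scoring described above can be sketched as a weighted average over the answered questions. A minimal Python illustration, where the question IDs and weights are hypothetical and QueueMetrics' actual scoring rules may differ:

```python
def form_score(answers, weights):
    """Weighted average of per-question scores (each 0-100).

    answers: {question_id: score}; weights: {question_id: weight}.
    Questions hidden on this call are simply absent from `answers`.
    """
    total_w = sum(weights[q] for q in answers)
    if total_w == 0:
        return 0.0
    return sum(score * weights[q] for q, score in answers.items()) / total_w

# Hypothetical form: "solved" weighs three times as much as "greeting".
score = form_score({"greeting": 80, "solved": 100}, {"greeting": 1, "solved": 3})
print(score)  # 95.0
```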
38. QA is a process and requires some planning.
●Define what you expect
–Don't make things too complex at first
–Tell your agents!
●Plan a call review process
–Don't oversample issues!
●Define initial QA review targets
●Plan in advance for corrective action to be taken
QA – Getting started
39. A simple form to review issues
●Track whether the problem is solved
●Track the sex of the caller
●If problem not solved…
–Track conversation metrics
●Different data input
●Non-scoring questions
●Show-hide questions
QA – A real-life sample
42. Define which items are a part of the form.
QA – Creating forms (2)
●Group items into
sections
●Move up and down
●Score contributions
●Visibility
Once a form has items in it, it cannot be changed.
44. Notes
●General notes
●Notes per question
Audio player
●Speed controls
●Playback controls
●Markers
Anatomy of a form (2)
What does a form look like? (continued)
45. Run Report → Queue details → Call Detail → QA
Form appears → Listen to the call → Fill it.
QA Input
Entering QA is really simple.
46. Results:
●Per agent
●Per queue
●Per group
●Per analyst
Showing:
●Averages
●Items by threshold
QA Results
To see results: Run QA reports → Criteria
47. Divided by section:
●By agent:
–Calls
–Average per question
●By queue:
–Calls
–Average per question
QA Results (2)
More results...
48. See all the graded calls that contributed to a result.
Click on a call to see its QA form (questions, notes, associated audio files).
QA Results (3)
How did a result come to be?
49. ●Overall summary
●Scoring questions
●N. forms scored
–Averages
–% per threshold
–% shortcuts
●Non scoring
–Averages
–Histograms
QA Results - Summary
Overview of results
51. Deciding which calls to grade
Just clicking on random calls is not especially effective.
●Poor ergonomics
●Selection biases:
–Issues / "interesting" calls
–Short / long calls
–First / last calls
Solution:
●Create a policy
–By call outcome
–By agent group
●Use the Grader's page
–Weighted random sampling
QA – Grading calls
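The weighted random sampling idea behind the Grader's page can be sketched as follows. The call records, criteria, and weighting rule here are illustrative assumptions, not QueueMetrics internals; the sampling itself uses the standard A-Res weighted-reservoir scheme so calls matching more criteria are preferred without ever excluding the rest:

```python
import random

def pick_calls_to_grade(calls, criteria, k, seed=None):
    """Weighted random sampling without replacement (A-Res scheme):
    a call matching more criteria gets a higher weight, but every
    call keeps a nonzero chance of being drawn."""
    rng = random.Random(seed)
    keyed = [(rng.random() ** (1.0 / (1 + sum(c(call) for c in criteria))), call)
             for call in calls]
    keyed.sort(key=lambda t: t[0], reverse=True)
    return [call for _, call in keyed[:k]]

# Hypothetical call records and policy criteria (outcome, agent group).
calls = [{"outcome": "SOLVED", "agent_group": "senior"},
         {"outcome": "UNSOLVED", "agent_group": "senior"},
         {"outcome": "UNSOLVED", "agent_group": "new"}]
criteria = [lambda c: c["outcome"] == "UNSOLVED",
            lambda c: c["agent_group"] == "new"]
sample = pick_calls_to_grade(calls, criteria, k=2, seed=7)
```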
52. Find efficient sets of calls to be graded
●Random sampling in call space
●Prefers calls matching multiple criteria
●Multiple Quality Analysts can work on
the same issue at once
●Shows target statistics
●First line of defence against quality
issues
–Ask for coaching
–Ask for CBTs to be taken
QA – Effective grading
54. Long-term agent performance management
●Agents need to be managed in order to work effectively
–Performance targets
–Quality policies
–Agents need training
●Agent lifecycle
–Training
–Validation period
–Review
–Production
–Periodic Review
QA – Agent Performance
56. The Performance tracker lets you:
●Apply Performance rules and find anomalies
●Send training and coaching
●Move agents between groups
●Get reminders for further review
QA – Performance tracker
59. QA is an effective process, but...
–Reviewing calls manually is very expensive
–Most calls are not very informative
–Biased sampling
–What if we could automate it?
●How can this work?
–Agent transfers the call to an IVR at the end of the interaction – 70-80% of callers will accept
–IVR gathers simple information ("Are you satisfied?" - "Problem solved?")
–Asterisk pushes this information to QM as a QA Form
●Targets:
–You have a way to review issues immediately
–You can monitor quality fairly and continuously
Surveys: Automating QA
60. ●Create a QA form called "SATISF"
–One section
–One single yes/no question - "Are you satisfied with the
interaction?" (item SAT)
●Create a remote user in QM
–User "qasubmit" password "passw0rd"
–Class ROBOTS
–Custom security key "QATRACK"
Surveys – QM set-up
61. Asterisk issue: the IVR happens on a different channel:
●When calling the queue, store the original UniqueId of the call and
the Queue used on inheritable channel variables
●Allow unattended transfers on the queue (# + xxx)
●Download script pushQA.sh from the Open QueueMetrics Add-
ons on GitHub
●Make it executable and edit its configuration
●Create a simple IVR script to transfer to
Surveys – Asterisk set-up
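A hedged sketch of what such a dialplan could look like. The extensions and variable names are our own, and the arguments passed to pushQA.sh are placeholders (consult the script's documentation for its real inputs); the `__` prefix is standard Asterisk syntax for inheritable channel variables:

```
; Stash the call's identity before queueing, so the survey IVR
; (which runs on a different channel) can report it.
exten => 500,1,Set(__QM_ORIG_UNIQUEID=${UNIQUEID})  ; __ = inherited by spawned channels
 same => n,Set(__QM_ORIG_QUEUE=support)
 same => n,Queue(support,t)        ; 't' allows the agent to transfer the caller
 same => n,Hangup()

; Survey IVR the agent blind-transfers the caller to (# + extension)
exten => 800,1,Background(are-you-satisfied)   ; placeholder prompt
 same => n,WaitExten(5)
exten => 1,1,System(/path/to/pushQA.sh ${QM_ORIG_UNIQUEID} ${QM_ORIG_QUEUE} SAT 100)
 same => n,Playback(goodbye)
 same => n,Hangup()
```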
64. Your services have a known peak time.
●You need to staff agents based on the load at peak time
●Your wait time SLA is stressed at peak time
●Harder interaction with callers who have waited too long
Solution: offer people the choice to be recalled
●Better SLAs – less staff required
●Happier customers
●Can be extended to web-based "call me back"
Automated recalls
65. Caller is offered the option to press '1' to be recalled
●If they do, a script starts to get their phone number
●When done, the number is piped to WombatDialer
WombatDialer monitors the recall queue for free agents
●When agents are available on it, recalls start
●Wombat knows how to handle busy, no answers, etc.
When the call is started...
●A script plays a welcome message on connection
●The call is queued on the Recalls queue
Auto recalls: the logic
66. QueueMetrics
●Situational awareness
●Agent management
Asterisk
●Script to track caller's numbers
●Script to greet people being recalled
WombatDialer
●Monitors presence on Asterisk queues
●Implements recalls
●Handles busy, no answers, invalid numbers, etc.
Auto recalls: components
67. QueueMetrics configuration
●Create two queues – one for inbound and one for recalls
●Monitor them through the Real-Time page
●Supervisor moves agents to and from the Recalls queue based on
the state of the Inbound queue
●Everything else happens automatically
Auto recalls: QM
68. WombatDialer
●Runs multiple independent campaigns in the background
●Receives numbers to be called and extra information for handling
them over HTTP
●Monitors agents continuously
–Are they available? Paused? Busy?
●Handles recalls with sensible rules
–Retry in 10 minutes if you get a busy
–Retry in 30 minutes if the number is free
–Call only during allowed times
●When successful, connects to the welcome script
Auto recalls: WombatDialer
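The retry rules above can be sketched as a small scheduling function. This illustrates the logic only, it is not WombatDialer code; the allowed calling window is an assumed example, and "FREE" follows the slide's wording for a ring with no answer:

```python
from datetime import datetime, time, timedelta

# Retry delays from the slide.
RETRY_DELAY = {"BUSY": timedelta(minutes=10), "FREE": timedelta(minutes=30)}
CALLING_HOURS = (time(9, 0), time(20, 0))   # assumed allowed window

def next_attempt(outcome, now):
    """Return when to retry a number, or None if it should not be retried."""
    delay = RETRY_DELAY.get(outcome)
    if delay is None:                 # SUCCESS, INVALID, retries exhausted...
        return None
    when = now + delay
    start, end = CALLING_HOURS
    if not (start <= when.time() <= end):
        # Outside allowed times: push to the start of the next window.
        next_day = (when + timedelta(days=1)).date() if when.time() > end else when.date()
        when = datetime.combine(next_day, start)
    return when

t = next_attempt("BUSY", datetime(2024, 5, 6, 10, 0))
print(t)  # 2024-05-06 10:10:00
```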
69. ●Solution can be implemented in very little time
●Does not require extensive changes to the PBX config
●Use the same tools you use for inbound
●Very extensible solution
–Reverse IVRs
–Voice synthesis
●Excellent ROI
Recalls – all together
71. How can we handle VIP callers?
●Special low-latency queues
●Different products
●Automatic routing of open cases
●How can we identify callers?
Usually done with dedicated DIDs, but...
●Hard to scale to hundreds or thousands of cases
●We can definitely do better!
VIP callers
72. VIP callers in QM
QueueMetrics offers "Known numbers", an easy-to-query
database of known numbers.
●Associates an action and an identity
to the caller
●Action can be time-limited
●Optional „agent affinity“ to first try the
agent who is currently handling the
case
●Database can be fed through JSON
API
73. VIP callers in QM (2)
For security reasons, you need a special user to access
QueueMetrics through the APIs.
●Create a remote user in QM
–User "pbxapi" password "api123"
–Class ROBOTS
–Custom security key „PBXAPI“
74. VIP callers in Asterisk
You query QueueMetrics before routing a call:
same => n,Set(CURLOPT(hashcompat)=yes)
same => n,Set(NUM=${CALLERID(number)})
same => n,Set(URL=http://my.qm/queuemetrics/numberLookup.do?mode=hash&user=pbxapi:api123)
same => n,Set(HASH(resp)=${CURL(${URL}&number=${NUM})})
same => n,Set(CALLERID(name)=${HASH(resp,name)})
same => n,GotoIf($["${HASH(resp,action)}" = "VIP"]?supervip)
same => n,GotoIf($["${HASH(resp,action)}" = "BLACKLIST"]?blklist)
same => ....
So with one single call you get:
●Name of the caller
●VIP or Blacklist
●Agent affinity
And you can route accordingly!
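The same lookup can be exercised from outside the dialplan, e.g. for testing. A Python sketch that builds the query URL shown on the slide; the response parser assumes a comma-separated "key:value" body, which is a guess on our part, so verify the actual hash-mode response format against the QueueMetrics documentation before relying on it:

```python
from urllib.parse import urlencode

BASE = "http://my.qm/queuemetrics/numberLookup.do"   # host from the slide

def lookup_url(number, user="pbxapi", password="api123"):
    """Build the numberLookup.do query URL used by the dialplan above."""
    qs = urlencode({"mode": "hash", "user": f"{user}:{password}", "number": number})
    return f"{BASE}?{qs}"

def parse_hash_response(body):
    """Parse a hash-mode body into a dict.
    ASSUMPTION: 'key:value' pairs separated by commas."""
    return dict(pair.split(":", 1) for pair in body.strip().split(",") if pair)

resp = parse_hash_response("name:ACME Corp,action:VIP,agent:Agent/10")
print(resp["action"])  # VIP
```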
77. Thank you for attending!
QueueMetrics www.queuemetrics.com
Loway www.loway.ch
A real programmer puts two glasses on his bedside table before going to sleep.
A full one, in case he gets thirsty, and an empty one, in case he doesn’t.