2. Overview
Problems of QA Evaluation
New Evaluation Pros vs. Cons
Criteria and Calculation
Using the tool
Questions and Answers
3. Problems of QA Evaluation
The evaluation criteria are unspecific.
QA Managers' evaluations are based on emotion.
QA Managers evaluate short-term performance instead of the whole year of good performance.
Testers complain that their evaluations are unclear and unfair.
4. New Evaluation Pros vs. Cons
Pros
Automated KPI tool.
Emotionless evaluation.
Easy to follow up.
Easy to compare real performance.
Fair for every tester, no more demotivation.
Easy to double-check.
Cons
More "paperwork" for QA Managers.
Must be followed and updated regularly.
Must organize large cross-team contests.
Evaluates only up to QA Senior level.
6. Late at Work – 5%
A QA Tester is paid by real working hours.
If a tester comes to work late: 0%.
If a tester never comes to work late: 5%.
"Late at Work" definition:
Arriving at the office after 8:10 AM without notice.
Taking a day/half-day off without notice or explanation.
7. Support Others – 5%
Based on 360 degree Feedback.
The final score will be the average of all feedback received.
Formula:
Support Others 360 Feedback Average x 5%
8. Working Attitude – 15%
Based on 360 degree Feedback.
The final score will be the average of all feedback received.
Formula:
Working Attitude 360 Feedback Average x 15%
9. Need Review Rate – 10%
Applies to QADB Junior only.
For QADB Senior, use the Team's Bug Quality Average instead.
Need Review formula:
[100% - (No. of Need Review Bugs / No. of Total Bugs x 100%)] x 10%
Team's Bug Quality formula:
Team's Bug Quality Average x 10%
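To make the Need Review calculation concrete, here is a minimal Python sketch. The function name, the zero-bug fallback, and the sample numbers are illustrative assumptions, not part of the original tool:

```python
# Hypothetical sketch of the "Need Review Rate" KPI (10% weight).

def need_review_score(need_review_bugs: int, total_bugs: int, weight: float = 0.10) -> float:
    """The fewer bugs sent back for review, the higher the score."""
    if total_bugs == 0:
        return weight  # assumption: no bugs reported means nothing was sent back
    need_review_rate = need_review_bugs / total_bugs
    return (1.0 - need_review_rate) * weight

# A junior who had 3 of 60 bugs flagged "need review":
print(round(need_review_score(3, 60), 4))  # 0.095, i.e. 9.5% out of a possible 10%
```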
10. Personal Bug Quality – 15%
Based on the result of the Bug Scanning Contest.
This is a cross-team contest.
The contest must be arranged every 02 months.
Formula:
Bug Scanning Contest Result x 15%
11. Personal Bug Quantity – 15%
This is a two-step calculation because:
Each team has its own project/game and number of bugs.
When a tester takes a day off, another tester has to support and do his/her tasks.
Duplicated bugs are not counted.
Testing Time = Normal Working Time + Overtime Working Time
Bug Per Hour formula:
Tester's Total Bugs Caught / Team's Average Testing Time
Personal Bug Quantity formula:
(Tester's Bug Rate / Team's Highest Bug Rate) x 15%
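The two-step calculation above can be sketched as follows; all names and the sample figures are illustrative assumptions:

```python
# Hypothetical sketch of the two-step "Personal Bug Quantity" KPI (15% weight).

def bug_rate(total_bugs_caught: int, team_avg_testing_hours: float) -> float:
    """Step 1: a tester's bugs per hour, using the team's average testing time."""
    return total_bugs_caught / team_avg_testing_hours

def bug_quantity_score(tester_rate: float, team_highest_rate: float, weight: float = 0.15) -> float:
    """Step 2: compare against the best rate in the team, then apply the KPI weight."""
    return tester_rate / team_highest_rate * weight

# Three testers, each measured against a 160-hour team average testing time:
rates = [bug_rate(b, 160.0) for b in (48, 80, 64)]
best = max(rates)
scores = [round(bug_quantity_score(r, best), 3) for r in rates]
print(scores)  # [0.09, 0.15, 0.12] -- the top bug-finder gets the full 15%
```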
12. Checklist Contest – 20%
Based on the result of the Android Checklist Contest.
This is a cross-team contest.
The contest must be arranged every 02 months.
Formula:
Android Checklist Contest Result x 20%
13. Report Skill – 5%
Give the score based on the ranges (Good, Above Average, Average, Below Average, Bad), using the 360 Degree Feedback method.
The QA Supervisor/Lead evaluates the reports of each tester.
Formula:
Report Score x 5%
14. Workshop Quantity – 5%
Workshops must be validated by the Trainer Team.
Encourages sharing knowledge/experience.
Formula:
(Presenter's No. of Workshops / Team's Highest No. of Workshops) x 5%
15. Workshop Quality – 5%
The Trainer must join the workshop and give the Presenter a score for:
Presentation skills
Quality of the workshop
If a Presenter organizes more than 01 workshop, the final score will be the average of his scores.
Formula:
Presenter's Workshop Scores Average x 5%
16. Management Skills – 5%
This score is a bonus based on 360 Degree Feedback.
Only the main key gets this bonus.
The QA Supervisor/Lead must register the name of the main key at the beginning of the Evaluation term.
Formula:
Management Skill 360 Feedback Average x 5%
17. Day Off & Comments
Days off are not counted in a tester's Total Score.
Comments are not counted in the Total Score, but they are mandatory.
Comments should be in compact form.
The QA Supervisor/Lead can collect comments from team members (via 360 Feedback emails).
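Taken together, the ten base KPI weights (5% + 5% + 15% + 10% + 15% + 15% + 20% + 5% + 5% + 5%) sum to 100%, with Management Skills acting as a 5% bonus on top. A minimal sketch of how the weighted total could be combined; the names are assumptions and the component scores are sample values on a 0-1 scale (1.0 = full marks):

```python
# Weights are taken from the slides; everything else here is illustrative.
WEIGHTS = {
    "late_at_work": 0.05,
    "support_others": 0.05,
    "working_attitude": 0.15,
    "need_review_rate": 0.10,
    "personal_bug_quality": 0.15,
    "personal_bug_quantity": 0.15,
    "checklist_contest": 0.20,
    "report_skill": 0.05,
    "workshop_quantity": 0.05,
    "workshop_quality": 0.05,
}
BONUS_WEIGHT = 0.05  # Management Skills, paid only to the registered "main key"

def total_score(component_scores: dict, management_score: float = 0.0) -> float:
    """Weighted sum of the ten base KPIs plus the optional management bonus."""
    base = sum(WEIGHTS[name] * component_scores.get(name, 0.0) for name in WEIGHTS)
    return base + BONUS_WEIGHT * management_score

# A tester with full marks everywhere and no management bonus scores 100%:
print(round(total_score({name: 1.0 for name in WEIGHTS}), 4))  # 1.0
```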
19. Enter Necessary Information
Teamlist Sheet:
1. Replace the <YourName> part with your real name.
2. Change the Start and Finish time of the Evaluation period.
3. Enter the correct information for each tester.
Position means the tester's real position.
QADB Position means the tester's position on the QADB.
20. Enter Necessary Information
Input Sheet:
Enter the necessary information following the instructions.
Do not change the format of the data.
Remember that blank and 0 are very different.
21. 360 Degree Feedbacks
1. Feedback requests should be sent from the QA Supervisor/Lead to testers.
2. Each tester can give feedback about all members of his team (including himself).
3. Testers give feedback following the template.
4. The QA Supervisor/Lead will sum up the results and use them for the evaluation.
Rank Average
Good 81 - 100
Above Average 61 - 80
Average 41 - 60
Below Average 21 - 40
Bad 0 - 20
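The rank bands above can be expressed as a small helper; the function name is an assumption, while the band boundaries come from the table:

```python
# Map a 360 Feedback average (0-100) to the rank bands in the slides.

def feedback_rank(average: float) -> str:
    if average >= 81:
        return "Good"
    if average >= 61:
        return "Above Average"
    if average >= 41:
        return "Average"
    if average >= 21:
        return "Below Average"
    return "Bad"

print(feedback_rank(73))  # Above Average
```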
23. 360 Degree Feedbacks
The QA Supervisor/Lead can have his/her own evaluation for each tester.
The QA Supervisor/Lead's evaluation must be another 360 Feedback email or an Excel file.
The QA Supervisor/Lead's point of view is treated equally with the testers'.
The Evaluation Tool and all 360 Feedback emails (compressed in a zip file) must be sent to all QAPMs as an attachment.
24. Understand The Result
The results of each 02-month period are automatically calculated in the Monthly-Points sheet.
The final results of the Evaluation Term (06 months) are calculated in the SUM-UP sheet.
The Total Sum-up Chart compares all the testers in the team.
To see how a tester's performance improves through the Evaluation Term, enter the name of the tester into cell C58.