DigitQ.in
Q!Digitz                                         Vol 1            Aug 2019
Assuring DevOps
Delivers Well!
DevOps Excitement
The moment we hear the name DevOps, there is
excitement in our minds, and we are certain
another personal certification is getting added to
our career. After the initial excitement wore off, I
started to learn DevOps the way it is taught to the
dev & ops teams. Within a few hours it pushed me
into a land of code and mountains of tools and
plug-ins.
Automation, and lots of automation: code moves
from one state to another when validation
conditions are met, and finally it reaches the
staging or production environment. That is it:
automating the pipeline of the development and
release cycle to the production environment with
many tools, so that no one manually intervenes or
delays the flow. When I understood it this way, my
first reaction was: what is the role of a quality
analyst? Does this mean we are losing our jobs
with DevOps? What can I contribute in the world
of automation and tool pipelines? There were
many questions. The answer is, 'as long as humans
are involved, variations and contexts are involved,
and activities and outcomes are involved, there is
a role for the Quality Analyst to perform
constructively.'
DevOps Focus
DevOps implementation in engagements has
several key driving factors. A few of the important
drivers are:
- Purpose
- Culture
- Automation & Environment
- Architecture
In this article, we will focus on the role of the
Quality Analyst in a DevOps engagement. The
journey starts either from scratch or as a
transformation from an existing delivery model.
The Quality Analyst can play an important role
both in the transformation and in the run mode of
DevOps engagements. Often, engagements go for
DevOps because it is sold futuristically to the
client: DevOps is the future, and we have it for
you now. Faced with the reality of delivery, they
are not sure why they need DevOps. How many
times a week continuous delivery or continuous
deployment is done, and whether we really need
it, are questions that arise at a later stage.
Next comes culture: when the team is not ready
for shorter cycles and has never worked with
collaborative and automated tools, challenges are
on the rise. The simpler way is to get them used
to Agile Scrum-style short cycles, from
requirements collection through to releases.
Automation is the next driver, which people often
misunderstand by assuming that once automated,
the pipeline will remain the same.
It depends on how compatible each of the
software packages is, how they get upgraded, and
what training we provide to people. Also, someone
has to monitor that the pipeline runs without
issues. Next comes architecture: with
micro-service architecture, Docker, shipping, etc.,
the architectural team has to take complete
advantage of the model and the environment they
use. These drivers also help a quality analyst know
each part's role, failure points, needed
improvements, performance monitoring, and
verification and validation needs.
DevOps Workflow
Multiple contexts exist when we talk about
DevOps, so here we will take the case of a DevOps
engagement using Agile practices and a cloud
environment for development and delivery. The
engagement has to ensure the environment meets
its expectations, and it signs a performance SLA
with the cloud providers. Whether the licenses
needed for the tools are managed by the cloud
provider, or existing licenses from the organization
will be used, has to be decided. We design the
workflow architecture with the number of users,
the servers required, and the tool configuration to
enable the pipeline setup. Automating the
workflow of development and delivery doesn't
mean the requirements, design, and coding will
get automated.
The human effort, prioritization, and application of
skill to develop product/application features must
be paid adequate attention. Applying an Agile
method helps break work into stories/smaller
functionalities, simplifies the development cycle,
and supports continuous integration. Agile
methodologies (Kanban, SAFe, etc.) can manage
the upstream part of development well. This
improves the application of DevOps automation,
as periodic delivery and deployment become
possible with automated pipelines.
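The gating behaviour of such an automated pipeline can be sketched in a few lines of Python. This is an illustrative model only, not the API of any real CI/CD tool; the stage names and gate functions are assumptions for the example. Each stage runs a validation gate, and the build is promoted to the next environment only when the gate passes.

```python
# Illustrative model of an automated delivery pipeline (not a real CI
# tool's API): each stage has a validation gate that must pass before
# the build is promoted onward.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (stage name, validation gate)

def run_pipeline(stages: List[Stage]) -> str:
    """Promote the build through each stage; stop at the first failed gate."""
    reached = "none"
    for name, gate in stages:
        if not gate():
            print(f"Gate failed at {name}; promotion halted")
            return reached
        reached = name
        print(f"Promoted to {name}")
    return reached

# Dummy gates stand in for unit tests, code quality checks, and so on.
stages = [
    ("build", lambda: True),
    ("staging", lambda: True),
    ("production", lambda: True),
]
final = run_pipeline(stages)  # reaches "production" when every gate passes
```

A failed gate at any stage stops the flow, which is exactly the point at which a quality analyst's verification checks feed the pipeline.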
A Quality Analyst can spend adequate time
ensuring that agile practices, or suitable
shorter-cycle-time methods, are followed,
resources are available, pipelines are checked,
licenses are available, and automated verification
tools (code quality) and continuous integration &
delivery tools with reporting abilities are
maintained in the engagement. Further, to make
the DevOps system succeed, the stakeholders and
business analysts shall provide feature and
functionality needs to the development team on
time.
The regression test cases and release schedules
are to be maintained for the context. In addition,
usage of the cloud should be controlled by
security practices that ensure authentication and
maintain authorization practices.
Continuous monitoring of the deployed code and
addressing of incidents can be handled as per
typical IT service management practices.
Continuous improvement plays an important role
in reducing waste and minimizing failures at any
stage. In one case, we saw there were too many
quality gates and approvals, which delayed and
denied the faster-deployment benefit itself.
Quality Assurance in
DevOps
As Quality Analysts, what we need to ensure in
DevOps projects are team culture & readiness,
process flow & criteria, configuration of scripts,
verification and validation practices of the pipeline
and product/application, and KPI-based analysis
and improvements. There are organizations that
have developed 'DevOps Maturity' assessments
and scoring models. It's definitely a good practice
to do the maturity assessments, as they give a
benchmark for improvements.
The following are the key elements we need to
check as Quality Analysts in DevOps engagements.
To ensure that the DevOps workflow is clean and
there is adequate clarity, we need to have:
- Defined workflow
- Pipeline configuration blueprint/architecture
- Work instructions for the pipeline
- Training material/guidelines
Verification & validation in development practices
shall have the following:
- Validated user story
- Defects log
- DoD criteria
- Automated code review report & actions
- Unit test report & actions
- Test case review & versioning
- Traceability of user story vs. automated test case version
- Build failure log
- Deployment rollback log
- Test failure report
Change management will play an important role,
as context variations and need variations will
evolve; hence we shall check the following:
- Environment change
- Pipeline flow change
- Approval for changes
- Deployment of changes
- Impact analysis of changes
We shall check continuous delivery practices for
the following:
- Approval process for configuration
- Code/scripts review
- Code/scripts configuration
- Access management of tools/environment for the team
- Policies for Dev/Test/Ops groups
We shall perform security configuration checks on
the following:
- Access-to-tools log
- Security key storage
- License management
- SLA for cloud (as applicable)
Monitoring of the pipeline and the health of the
pipeline are to be checked with the following:
- Application performance report
- Automated health check report of the application
- Rollback failure analysis
Besides the above checks, it's important that we
have adequate measures as indicators in DevOps
engagements.
- Lead time from requirement to deploy
- Deployment rate per day or week
- Build failures in a period
- Deployment rollbacks & others
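As a rough sketch of how such indicators can be derived, the snippet below computes average lead time, deployment rate, and rollback count from a list of deployment records. The record fields are assumed names for illustration, not any specific tool's schema.

```python
# Computing DevOps indicators from deployment records (illustrative data;
# the field names "requested", "deployed", "rolled_back" are assumptions).
from datetime import datetime, timedelta

deployments = [
    {"requested": datetime(2019, 8, 1), "deployed": datetime(2019, 8, 3), "rolled_back": False},
    {"requested": datetime(2019, 8, 2), "deployed": datetime(2019, 8, 4), "rolled_back": True},
    {"requested": datetime(2019, 8, 5), "deployed": datetime(2019, 8, 6), "rolled_back": False},
]

# Lead time from requirement to deploy, averaged over all deployments
lead_times = [d["deployed"] - d["requested"] for d in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Deployment rate per week over the observed period, and rollback count
period_days = (max(d["deployed"] for d in deployments)
               - min(d["requested"] for d in deployments)).days
rate_per_week = len(deployments) / (period_days / 7)
rollbacks = sum(d["rolled_back"] for d in deployments)

print(f"avg lead: {avg_lead}, deploys/week: {rate_per_week:.1f}, rollbacks: {rollbacks}")
```

Tracked over time, a rising lead time or rollback count is exactly the kind of indicator the Quality Analyst should raise in the engagement.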
DevOps thinking for
QA
DevOps is neither a magic wand nor a completely
automated development platform in today's
context. There are still human elements,
synchronization elements, and tool configurations
of process activities, which need attention from
the Quality Analyst. One small error in
configuration can create a chain of events all the
way to production, and can create business
impact even more than once. Hence, it's important
that all aspects are validated well before the
platform is launched for development. Similarly,
the validation practices would include dynamic
feature testing with aids; hence, there need to be
human intervention and maintenance of
configurations. In every context there needs to be
high traceability maintained, and sometimes the
domain would also want logs of every activity with
relevant approvals. This needs attention from the
Quality Analyst. DevOps redesigns quality
assurance: we need skills in technology like the
cloud, methods like agile, and the set of CI/CD
tools before we jump in and focus on speed of
delivery, security of practices, and failure-based
improvements.
Quality is
Retaining Clients
Business Value of
Quality
The contractual terms and delivery models have
changed drastically with technology upgrades.
The challenges Quality Assurance faced a few
years ago no longer seem critical. The
involvement of the client in delivery has brought
in better functionality delivery and control over
milestones. Quality Assurance in many
organizations is focused as a multi-point weighted
activity, where we try to cover each and
everything.
However, the critical focus areas have loosened
up, as defect or schedule slippage is no longer the
concern, but a few other things are. So we have
Quality Assurance which has no focus, but delivers
a compliance score. In some places, it is just
task-by-task observation, and it misses the point.
So where is the business value addition from
quality? Can quality assurance have a critical
focus and show why it's important? We will
discuss that here.
Strategizing Focus in
Project
Quality Assurance in a project must have a clear
focus to enable the project team to succeed in
their work. Whatever the delivery model, there are
two ways in which we can look at quality
assurance: a) the circle of risk and b) the circle of
improvement. Often we believe the project is
stable, so the assurance team has nothing to do,
or the project is firefighting, so they don't want
the assurance person to visit them. Both scenarios
show how we missed the chance to influence and
deliver better. Most assurance teams get confused
about setting up a clear focus and how to build on
it.
To keep the case simple, we will assume the
client gives us a 3- or 5-year contract to digitize
their application portfolio and manage the
existing infrastructure and applications. The focus
for the assurance team should be progressive
quality assurance: enabling the project to develop
abilities, perform to the context, and build
maturity. The result would be obvious: the project
shall be able to renew the contract successfully
with the client irrespective of market competition.
The progressive journey of assurance is not an
accident, but a strategy to build the focus and
abilities needed to perform at the next level.
The assurance roadmap shall enable stabilizing
the account and converting it into a capable one,
and from there lead towards contract renewal.
In the progression from an unstable account to a
stable account, applying the "Circle of Risk" as the
focus is critical to address the challenges and
produce compliant and productive outcomes. In
the progression from a stable account to a
capable account, applying the "Circle of
Improvement" is critical to seize the opportunities
and excel in performance.
Circle of Risk and
Circle of
Improvements
The parameters of a project don't change over its
life cycle; however, the context changes and the
environment changes. So let's take any one type
of life cycle, such as a development project. There
are various parameters and components we as
the assurance team monitor and support.
However, the support is valuable only when we
associate a purpose with it and show the impact it
can make.
The Circle of Risk focuses on identifying failure
points and arresting them. The project shall be
able to meet its expectations; wherever it can fail,
and whatever can make it fail, are proactively
analyzed, and the associated risks are highlighted.
The risk-based focus is very important in building
stability in the project.
The Circle of Improvement focuses on improving
performance to move from the stable state to a
capable state, where client expectations are
fulfilled effectively and efficiently. The circle of
improvement is about using opportunities and
being foresighted, to achieve the results and get
ready for renewing the contract.
The faster we achieve stability by applying the
circle of risk, the more cycle time is available to
improve performance with the circle of
improvement. This would ensure that the client is
more delighted, and that we as the service
provider have a deeper understanding of the
systems and can develop better contractual
negotiations.
The Circle of Risk focus would be on the following
areas:
- Resource skill & availability
- Dependencies & timelines
- Standards & procedures
- Tools application
- Security awareness & adoption
- Requirement clarity
- Validation methods
- Customer collaboration
- Compliance, security & regulations
- Reports and actions
- Problem analysis
- Governance and structure
In the initial few months to a year, the project has
to ensure they are on top of building stability.
The Circle of Improvement focus would be on the
following areas:
- Industrialization
- Automation
- Knowledge database & reusable components
- Tools enhancement
- Technology upgrade
- Innovation
- Process changes & upgrades
- Client intimacy
- Continual improvement
- Simplification & integration
- Cost saving
Year-on-year savings, and making the project
ready for contract renewal, are achieved with the
above practices.
Contributing in
contract renewal
This is not about communicating to the client
what certifications in assurance the organization
has. It's about building on the "Circle of
Improvement" to focus on competitive and unique
solutions for the contract. The assurance team
can play a vital role in the contract renewal
journey. Taking this ahead from the Circle of
Improvement, in the last 6 to 8 months before the
contract renewal date, it's important that we start
baselining the project parameters.
These data are vital for the client and for the
organization to know, and to take any calculated
risk in the delivery and pricing model. Every
organization undergoes many changes over the
few years of a project's contract period; hence,
many new improvements or changes may become
available to the project team. These have to be
taken into consideration. Ideally, a 5 to 20% cost
reduction for a similar scope is possible with
these improvements and the people's knowledge
gain. The assurance team not only baselines and
supports the binding of improvements from
multiple corners; they can also take part in
developing the process architecture, building
request-for-proposal components, and
participating in due diligence. The assurance team
can review the delivery model for successful
delivery.
Roadmap to Renewal
with Assurance
Contract renewal is not a last-two-months,
performance-based activity; it's the outcome of
the client's realization that the service or product
is of quality for the given cost and meets their
business need. The project has to cross the
hurdles of instability, reach stable delivery, and
then reach a performing state. The assurance
team has to ensure these transformations
happen, and the goal they want to achieve is to
enable renewing the contract. It's easy to lose the
path, or to relax by delivering in the same
manner; however, this won't make the
organization the undeniable leader that gets the
contract renewed.
The Circle of Improvement, combined with a
contract renewal focus, can bring enormous
success.
Cooking an
Artificial
Intelligence (A.I)
Recipe for Client
Delight
There is a huge amount of client satisfaction data
available in major companies, often spread across
years. One of the important expectations from
any quality analyst is to oversee how the product
or service is perceived by the client, and what
opinions they carry about the service provider. In
many organizations, the client satisfaction score is
still a surprise result, and then the firefighting or
appreciation chain starts. They store the data for
overall baselining of how many satisfied clients
they have. However, rarely can we predict what
the client will feel and how they will rate our
product/service based on existing data.
Hold on: every client is unique, and every point of
contact at the client can be different. This,
however, is an invalid argument against
prediction. I too agree that they are unique; yet
among the uniqueness there can also be patterns
and common likings and appreciations. Let's take
a few common examples: across the globe, people
have different behaviors, cultures, likings, and
values, but still there are books which sell across
the world, there are services and food sold
everywhere, and there are movies which make
maximum revenue irrespective of region.
This is possible because we like a few abilities
and characteristics across regions. Does this
mean the globally successful products are
successful in every region at the same level, and
that they are more successful than regional
products? The answer is no. Regional variation,
cultural variation, and likings all still have
influence, but there is always something in
common. This is the reason I feel an A.I. recipe is
better suited to shaping the service for client
delight.
I assume the reader has a basic understanding of
the connection between A.I., machine learning,
and deep learning. If not, go through the YouTube
video https://youtu.be/WSbgixdC9g8. When a
project starts with a client, by knowing the key
drivers, we should be able to tell whether we will
achieve client delight or fall short of it. This would
help us follow the drivers and take actions that
handle them better. As our intention is to know
the drivers and the achieved result, going ahead
with deep learning is not the choice here.
Instead, we can apply a machine learning
technique like a decision tree. As said earlier, we
believe we have a reasonable amount of data with
which to construct the decision tree. The data
shall have relevant characteristics (drivers) like
sector (private vs. public), domain (healthcare,
aerospace, etc.), type of service (application
maintenance, product development, etc.), region
(countries or states), technology (digital, cloud,
big data, mainframe, .Net, etc.), year of contract
(1st year, 2nd year, etc.), type of contract (fixed
price, time and material, etc.), method (Agile,
DevOps, incremental, etc.), and many more
relevant fields.
It's certain that in every organization there will be
a few who look for a pattern; however, a pattern
conditioned on other variables is difficult to spot
by simple visual inspection. It needs better
models, like a decision tree, to provide insights
and give results quickly. To know more about
decision trees, watch the YouTube video
https://youtu.be/DCZ3tsQIoGU.
We can use the existing data in the organization
to train the decision tree, and for this we might
split the data into 2 parts for training and 1 part
for testing. We can use the scikit-learn library with
Python for supervised and unsupervised learning
of the data. Once the decision tree is established
and visualized with its decision branches, the
nodes from which the branches start are the
drivers we need to watch out for, and the values
which lead to client dissatisfaction shall be
controlled, or actions taken to balance them out.
For example, we might get a result like: in 15
percent of client dissatisfaction cases, the key
combination of drivers was Private Sector >
Aerospace > Product Development > Germany >
Cloud Technology > Time & Material > Incremental
model. Here, when the incremental model
changed to Agile, the value was much lower.
Such insights about client satisfaction are gold for
any quality or delivery person working towards
building a better recipe for developing software
with the client. The strength of these machine
learning models is that they can read a large
volume of data and correct their learning to give
better results.
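As a minimal sketch of this approach, the snippet below trains a decision tree with scikit-learn on synthetic driver data, using the 2:1 train/test split mentioned above. The driver columns, values, and satisfaction labels are invented for illustration; real client data would replace them.

```python
# Sketch: decision tree over client-satisfaction drivers (synthetic data).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [sector, domain, contract type, method]; label 1 = satisfied
rows = [
    ["private", "aerospace", "T&M", "incremental"],
    ["private", "aerospace", "T&M", "agile"],
    ["public", "healthcare", "fixed", "agile"],
    ["private", "healthcare", "fixed", "incremental"],
    ["public", "aerospace", "T&M", "agile"],
    ["private", "healthcare", "T&M", "agile"],
] * 5  # repeated so a 2:1 split still has both classes on each side
labels = [0, 1, 1, 0, 1, 1] * 5

# Decision trees need numeric input, so encode the categorical drivers
X = OrdinalEncoder().fit_transform(rows)

# 2 parts for training, 1 part for testing, as described in the text
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=1 / 3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)

# The branch printout shows which drivers split satisfied from dissatisfied
print(export_text(tree, feature_names=["sector", "domain", "contract", "method"]))
```

In this toy data the method column alone separates the labels, so the tree surfaces it as the top driver; on real data the branches would expose combinations like the sector > domain > method chains described above.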
The decision tree is only an example; many other
algorithms exist which are better or comparable.
The reason we are talking about it here is that we
as quality analysts shall not just baseline client
satisfaction and leave it there. We can predict the
behavior, and we can find the influencing
characteristics which make a good recipe for
client delight. Let's explore A.I. for application in
the quality engine.
Agile Culture
adoption and
Client
contentment
Author – Aarti Patil
Agile has become the default choice for
development, simply because it delivers a higher
success rate than other delivery models. The 2018
CHAOS report (Standish Group) classified agile
projects as 42% succeeded, 50% challenged, and
only 8% failed. This is a lucrative offer for
everyone to start going the agile way. The core of
agile success revolves around agile culture; it's
very robust, but at the same time hard to
implement without well-built senior management
support. A few important basic factors which can
improve the probability of success while
implementing are: a strong reason for the
mindset shift, clear visibility of the new ways of
working, upskilling on process and tools,
motivation, and compensation and reward
programs.
Business readiness is crucial: the IT organization
needs to demonstrate to the business how agile
development will drive value, with its pros and
cons, rather than just selling agile.
Business-specific training programs and
workshops on agile methodology can help the
business follow agile diligently.
This will make an enormous transformation in the
way IT and the business interact, which will fetch
quality deliverables to the business with speed to
market. Building the right skills on emerging
technologies, combined with team restructuring,
will bring more awareness of agile culture at the
organization level.
Balancing agile culture and client satisfaction is
the toughest challenge in reality. Organizations
are offering agile development solutions to the
business, but is agile actually executed in an agile
way? Is it steadily becoming hybrid? Or eventually
moving back to waterfall? Timeboxed deliveries or
stand-up meetings alone don't make up an agile
culture; in fact, they create false confidence.
End-to-end agile delivery is hard to adopt. Today,
IT organizations and other industries are winning
business by promising to deliver software faster
and meet market demands; at the same time, it is
vital for both vendor and client to understand the
agile methodology and the various frameworks
used within it to execute the projects. Client
satisfaction is critical for a long-term business
association.
IT organizations set their focus on agile
requirements gathering in the form of user
stories, which should be clear, with supporting
artifacts, aligned with business expectations,
estimable, and testable. Of utmost importance are
the infrastructure needs and a well-defined
environment setup for quick transition, which is
sometimes a major reason for delayed delivery to
the business, further affecting cycle time and
cost. User stories should be as independent as
possible. Handling client expectations in such
scenarios becomes difficult, so a thorough
understanding of the client's business,
expectations, industry background, and market
trends should be prioritized by involving experts,
which will make project execution and transition
smoother. This will also open new opportunities
and areas in the business by respecting client
contentment in the future. Agile culture is not
independent of client contentment; it's the way
we collaborate and adapt to achieve client
expectations through agile practices.
Managing
Effective Cloud
Migration with QA
Validations
Cloud will cover the IT world soon, with 60%
global market growth in the next 3 years and 80%
of organizations moving to the cloud by 2025
(reports from Gartner and Computerworld UK).
Many organizations have embraced the cloud in
principle, but often there is one challenge: 'the
detail'. Organizations choose the cloud for
multiple reasons, including data center reduction,
increased global presence, the need for
processing power, cost benefit, etc. As they move
from agreement in principle to the evaluation of
public or hybrid cloud service providers and the
services they need, they have to get into the
detail.
The details include the various applications,
servers, dependent configurations, criticality,
compliance, users, etc. Getting an understanding
of their own IT systems and their criticality takes
adequate time. Then they need to prioritize which
applications or servers have to be moved to the
cloud first, which also involves deciding who the
cloud service provider is and what kind of
migration support they will provide.
Most organizations select the less critical
applications that often need changes (e.g.,
websites) and then grow comfortable with
migrating other applications or servers.
Many organizations hesitate and continue with
their on-premise IT systems even when they know
their IT landscape well, because they lack clarity
on the migration and on what kind of outcome
they can achieve with less pain.
Cloud service providers understand the problems
and expectations of these organizations. Most of
the top service providers have come out with
their own migration life cycles, with phases and
deliverables lists, ranging from an initial
assessment to tools which can simplify the
migration activities. These are important to boost
the IT teams' confidence in business
organizations. However, migration is not a simple
process even with the current level of capabilities;
hence a stronger life-cycle-based, phased
approach can enable smoother migration.
Most IT organizations are involved as service
providers who help the business organization take
services from cloud service providers and migrate
on-premise applications, servers, etc. to a public,
hybrid, or private cloud. The image given here is a
generic approach drawn from existing cloud
service providers' migration models. The initiation
phase involves understanding the needs and
establishing agreements between all parties. The
discovery and assessment phase involves
portfolio assessment, dependencies, and CloudFit
assessment, and then the migration items
pipeline.
The planning or design phase involves the
migration plan with acceptance, a migration
strategy and landing zone architecture, a training
plan, and a pilot go/no-go. The migration phase
involves setting up the infrastructure, making
instances migration ready, migrating, and
rightsizing the service; here one can refactor,
re-platform, re-host, or repurchase. The
integration/validation phase involves integration
of IT, cutover, UAT sign-off, training, and the
post-migration report. The optimization and
closure phase involves the optimization
assessment, performance monitoring reports, and
closure reports.
The Quality Analyst shall ensure that the phases
and deliverables are followed with no deviation.
Each of the critical deliverables shall undergo the
relevant verification/validation activity, and the
users shall be trained to operate. The Quality
Analyst has to check that acceptance criteria are
met and performance is monitored, to ensure the
migration program is going well. The following are
some key activities and deliverables for which the
QA has to review that the relevant aspects are
addressed.
SOW/Contract Review
Check the migration and any specific
performance/security targets, and third-party
dependencies.
Portfolio Assessment/Catalog of Source - Review
Check if the report is shared with the client and
approved. Update any related risks in the risk log
or any client-shared register.
Source Analysis Report (or) Cloud Affinity Index &
Decision Tree
Check that dependency details are filled in and
the current performance of systems/components
is baselined. Check that the report is signed off
and agreed with the client.
Migration Plan (& Migration RACI)
Check for the timelines, phases and deliverables,
acceptance & success criteria, the to-be
performance state, resource needs, and RACI.
Check for approval of the migration plan. Check if
any tool selection was done for migration; if so,
the finalization of the relevant factors is
documented.
Migration Strategy
Check for the pattern of migration (refactor,
re-host, re-platform, etc.). Check that the security
responsibilities, reliability, performance needs,
and cost considerations of the target/landing
architecture are documented.
Migration List /Prioritized list (or) Migration
information form
Check for the updated list of migration-ready
components/servers/data/applications.
Pilot Report (as applicable)
Check for the go/no-go decision and that the
challenges, risks & lessons learnt are
documented. The lessons are to be carried into
the migration activities.
Migration Schedule
Check for the intermediate milestones and the
planned percentage completion. Check for the
dependencies identified for meeting schedules,
and the risks.
Master List with Configuration Details and Status
(and/or) Run Book
Check for the schedule, status of migration, and
pending issues. The runbook should have detailed
steps/activities with configurations, checkpoints,
and status details.
Test Report
Check for the functional, security, and
performance test cases and the test report
Cut Over Plan
Check for the readiness and the rollback plan.
Training Plan & Records
Training materials, the training plan, and the
training report/completion details are to be
maintained for the user/client.
Post Migration Report & Acceptance
Check that the success criteria are met, along
with performance measures, recorded
issues/resolutions, etc. Check for the approval.
Pre & Post Migration Technical Review Checklist
Check if the project team used the pre- and
post-migration checklists, and any tool for
evaluating the migration, and that the checklists
were applied in the project.
Migration Metrics
Agree with the project on the migration metrics
and review the data on a monthly or biweekly
basis, with defined thresholds and violations
supported by analysis.
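A simple sketch of such a threshold review is below; the metric names and limits are illustrative assumptions, not an agreed standard from any framework.

```python
# Flag migration metrics that violate their agreed thresholds, so each
# violation can be taken up with supporting analysis in the review.
metrics = {
    "servers_migrated_pct": 72.0,  # progress against plan
    "rollback_count": 3,
    "post_migration_defects": 8,
}

# Agreed thresholds: (limit, True if higher values are better)
thresholds = {
    "servers_migrated_pct": (80.0, True),
    "rollback_count": (2, False),
    "post_migration_defects": (10, False),
}

def violations(metrics, thresholds):
    """Return the names of metrics that breach their threshold."""
    breached = []
    for name, value in metrics.items():
        limit, higher_is_better = thresholds[name]
        if (higher_is_better and value < limit) or \
           (not higher_is_better and value > limit):
            breached.append(name)
    return breached

flagged = violations(metrics, thresholds)
print(flagged)  # the metrics that need analysis in this review cycle
```

Keeping the direction of each threshold explicit (higher or lower is better) avoids the common mistake of flagging good progress as a violation.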
The checks on the above activities and
deliverables will enable the cloud migration to
resolve challenges quickly and give a clear view of
work-item progress. Often, migration is a process
weakly addressed after an enthusiastic start;
then, based on time availability, the depth of the
migration varies across accounts, which often
impacts the quality of the migration. So it's
important for us to ensure the right level of
progress happens on every front of the migration.
The above thought process can help in building a
stronger connection.
Agile @ Scale –
What does it
mean to a Quality
professional?
Author–Vasanthi Veerappan
It was way back in 2010 when one of our US
banking clients started their Agile journey as part
of an IT transformation program, and my
organization was supporting them in this journey
as an offshore software vendor. It was my first
association with an Agile project as a quality
analyst, and to be frank, I was very apprehensive
about the way the entire process works in an agile
fashion. I was zapped by terms like sprint, poker
estimation, retrospection, etc. Everything sounded
new, and I was constantly on my toes trying to
identify any problem, alert, risk, or slippage that
would reassure me that traditional waterfall
development methods still work and that the IT
process world is still the one I am comfortable
with.
Needless to say, the project faced some delivery
issues related to quality and schedule. But overall,
when a customer satisfaction survey was done 10
months and 3 releases later, it was a moment of
celebration for every stakeholder associated with
the project for the success they had experienced
through the Agile life cycle.
So, at that point in time, it became evident that
Agile was here to stay. Now, almost 9 years down
the line, Agile has indeed stayed, and agility has
taken on predominant importance at the next
level as well, i.e., Agile @ Scale.
In the Agile world, almost every aspect of
governance and implementation, including the
quality of deliverables, is a shared responsibility of
the whole Scrum team, so there is always a
question of where an independent role like QA fits
into the whole gamut. What needs to be noted
here is that it is not just the technical code quality
of deliverables that matters, but how we sustain
that quality in the longer run. This needs to be
achieved using built-in quality practices that
ensure team agility.
As a quality professional in this era, I strongly
believe that processes hold even more importance
than they did back then for smaller programs.
Just as Agile practices and principles are extended
to scale up to larger programs, our QA processes
also need to be scaled up accordingly and
integrated into the system.
There are some areas I have learnt from the
engagements I have supported during the last few
years, and I think these are areas every QA
professional should take into account, or take up
responsibility for, when supporting large Agile
programs.
Think Large
Always derive quality themes and quality
strategies for the large program at the overall
account level. Never start at the team level to
establish quality goals. This is one mistake we
make when we support large programs for quality
assurance: we start off small, trying to define a
governance plan, tailor the processes, establish a
measurement system, and so on for the first
scrum team, and then when another scrum team
onboards, we repeat the same process. Slowly, we
realize that what fits one team does not fit
another, and this leads to chaos in the middle of
program execution. So always look at the
account/program, understand what the business,
organization, and technology goals are, and then
derive the quality themes and strategy so that
they get embedded within the overall account
strategy. When the themes are set at the larger
program level, they flow down as quality goals to
individual scrum teams, and the focus and
importance are built into the processes
themselves. E.g.: define KPIs for UI/UX teams or
test automation teams; establish feedback
mechanisms at various levels; etc.
Design Ways of
Working across all
Agile Teams
While different scaling models like SAFe, LeSS,
Nexus, etc. provide guidance on what practices
need to be defined, how the teams need to be
structured, how the practices need to be
implemented, and so on, as quality professionals
we still have a lot of work to do in setting up the
Ways of Working within the team. It starts right
from defining and agreeing with all scrum
masters, product owners, etc. on what standards
to follow (e.g. development framework, defect
tool, testing tool) and extends to defining
collaboration mechanisms for the various roles
within the teams (e.g. between solution architects,
between business analysts).
It could also be simple things like what the defect
status workflow should be while tracking defects.
For example, if one scrum team decides to use a
visual board for tracking, ensure it gets used in a
similar fashion across all teams. It will remove
many overheads during program-level metrics
tracking.
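One way to make "used in a similar fashion across all teams" concrete is to encode the agreed defect workflow once and validate each team's status histories against it. The states and transitions below are assumed examples, not a prescribed standard:

```python
# One agreed defect workflow for the whole program: each status maps to
# the set of statuses it may move to. Teams that invent ad-hoc shortcuts
# (e.g. "New" straight to "Closed") get flagged before their data skews
# program-level metrics.
ALLOWED_TRANSITIONS = {
    "New":            {"Open", "Rejected"},
    "Open":           {"In Progress"},
    "In Progress":    {"Ready for Test"},
    "Ready for Test": {"Closed", "Reopened"},
    "Reopened":       {"In Progress"},
}

def is_valid_history(history: list) -> bool:
    """Check that a defect's status history follows the agreed workflow."""
    return all(nxt in ALLOWED_TRANSITIONS.get(cur, set())
               for cur, nxt in zip(history, history[1:]))

print(is_valid_history(
    ["New", "Open", "In Progress", "Ready for Test", "Closed"]))  # True
print(is_valid_history(["New", "Closed"]))                        # False
```

A check like this can run against exports from whatever defect tool the teams agreed on, so inconsistencies surface early rather than at metrics-aggregation time.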
The bigger the team, the better the facilitation
skills required to bring every stakeholder to the
table and agree on the Ways of Working for the
team. It doesn't stop there: once agreed, it
becomes a prime responsibility of the QA
professional to determine how the Ways of
Working are introduced to newly added teams,
how those teams are trained, and how we monitor
quality and progress throughout.
This is what will ensure that the team of agile
teams works together as one unit. While models
like SAFe do talk about functions like the LACE
(Lean-Agile Centre of Excellence), which helps set
this up, such functions do not get formed at the
very beginning, which is the very reason we need
to jump in and help at the start.
Identify systemic
failures
Establishing ways of working leads to the next
imminent step of managing their implementation.
One of the obvious ways to do this is by
monitoring and understanding the quality trends
from the different teams. As QA people, we are
uniquely positioned to see the complete big
picture using parameters from all the teams and
to assess the overall system quality.
When we receive input from all teams and their
cross-functional activities, we get to understand
the overall systemic problems, identify the
bottlenecks, drill down to specific working
practices, and come up with the right
recommendations on how the practices need to
be adapted to fix those gaps.
Scaling with Agile
Scaling requires everyone to collaborate to make it
work. The scrum teams might very well take the
lead in establishing built-in quality through code
quality practices, testing practices, etc., but it is
not enough to implement just the advocated
models' current guidance on quality practices. It
requires attention and discipline at every level,
and we as quality ambassadors should be ready
to relentlessly help the teams inculcate this quality
culture within.
Auditing Cloud for
Banking Domain
Authors–Anand Patel, Archita Ghadi,
Sonal Shah
We have heard of the penetration of cloud in many
sectors and keep hearing of its ever-increasing
usage, its pros and cons.
But we rarely hear about cloud going hand in
hand with the BFSI domain. The moment we hear
of the BFSI and cloud combination, our anxiety
reaches its peak. Common questions that come to
mind are "Will it be safe?", "How on earth will they
handle data privacy?", "Will sensitive data be
secured?", and so on.
India is the first country in the world to have a
"Banking Community Cloud". IDRBT is a research
and development institute established by the RBI,
and in alliance with C-DAC it has successfully
achieved this feat. IDRBT has deployed IaaS
services in this community cloud.
This paper primarily discusses what an auditor's
focus areas and challenges should be in auditing
such a system. We will deal with all major
concerns in depth, beginning from the goals and
taking it ahead to technical details.
Business Goals – To begin with, the auditor needs
to verify what business goals have driven this
decision, and whether the implementation is
aligned to those goals.
 A very common business goal would be cost
savings: would there be a considerable
reduction in the infrastructure-related
capital expenditure of the banks?
 Would it help the bank by removing its
dependency on IT maintenance or on hiring
highly technical staff, while achieving
scalable, good performance?
 An in-depth assessment of the driving
factors that led to the decision to join the
community cloud could be a good starting
point.
 The buy-in of the IT Steering Committee and
the creation of a Cloud Policy Statement
could provide a basis for further work.
 Has there been a formal vendor viability
assessment by the bank prior to becoming a
party to the community cloud?
Regulatory Compliance and Needs – Banks need
to comply with multiple national and international
regulations while handling customer data. Many
banks require that the financial details of the
customer stay within the geographical boundaries
of the country.
The auditor here can ask for an inventory of the
mandatory compliance needs of the bank.
Depending on the business needs, there could be
a set of compliance needs common across all
banks, with the rest applying case by case to the
specific bank.
 Are all the regulatory and compliance needs
satisfied by the CSP? How can that be
verified?
 Would the CSP provide certificates that
validate adherence to compliance needs?
 Along the same lines, can the CSP itself be
audited?
 With banks, the challenge gets tougher.
With increasing globalization and changing
financial scenarios, certain compliances that
are not applicable today may become
mandatory in the near future. For these
scenarios, would the CSP have the ability to
comply and provide support?
So, in this community cloud, an auditor needs to
focus on the specific compliance needs of the
bank and the controls the CSP deploys for
assurance.
Reliability and Availability – In the digital age, with
features like mobile banking, availability of the
applications becomes a very critical factor. CCID
(the Cloud Computing Incidents Database) has
recorded cloud outages ranging from a few
minutes to 48 hours, which amply shows that
cloud is not immune to outages.
In our example of IaaS for the Banking Community
Cloud, the IaaS delivery model would be used for
computing and storage infrastructure along with
certain services like account management, a
message queue service, a database service, etc.
 The auditor should verify the contract to
understand the services offered by the CSP
and the impact of outages on them.
 The geographical diversity of the data
center architecture and its fault tolerance.
 The availability management processes and
BCP of the CSP.
 The impact of non-availability of the
database on applications and in-process
transactions.
 What communication mechanism is agreed
between the CSP and the bank in case of
such outages.
 The impact analysis by the bank that
established the RTO and RPO baselines, and
the subsequent agreement by the CSP.
 The contingency plan developed by the
bank for outage periods.
Interoperability and Portability
In the fast-changing business landscape, a bank
may sometimes need to change its CSP. There
could be multiple reasons for doing so. In these
scenarios, it makes sense to assess portability and
interoperability; not doing so risks being stuck
with the vendor.
From an IaaS perspective, the storage capability of
the CSP would be of highest concern.
Interoperability would not be a major issue with
IaaS, because the banks would own the
applications themselves; hence, there would be no
impact on application interfaces.
 After migration of infrastructure to a new
vendor, would the existing CSP release the
IPs?
 After potential termination of the contract,
the portability of data and metadata (e.g.
the format of the output/extract from the
vendor) and the purging of data by the
service provider. Data remanence poses a
higher security threat, and the auditor
needs to double-check the mechanisms the
former CSP enforces on storage media after
release of the data.
 The CSP should have agreed to, and should
evidence, the clearing and sanitization
approach used. The auditor should refer to
certificates that specifically cover media
sanitization, such as adherence to the NIST
SP 800-88 guidelines, to ensure the right
compliance by the CSP.
 The auditor can review the CSP's data
destruction policy, if accessible.
Security and Data Privacy – This forms the meat
of the entire audit. Data security and privacy are
core to any business holding customers' sensitive
data, and banking qualifies for extra scrutiny. The
auditor should also check whether the bank has
access to a security audit report of the CSP.
We can segregate this topic into multiple areas as
below –
Physical Security
What guarantees are provided by the CSP to
assure the physical security of data centers,
storage, and network resources?
Network Security
 Is there an agreement to have access to
network-level logs of the CSP?
 Is there an agreement to investigate and
collect forensic-level data?
 How are IPs released by the CSP, and how
are they reassigned?
 The security controls of the CSP, such as
firewalls, IPS and IDS, and SIEM (Security
Information and Event Management)
coverage.
 The CSP may collect syslog files. Has a risk
assessment been done to understand what
data goes into the syslog files (such as
authentication and authorization details)?
The auditor needs to question the bank to
understand the inventory of this data.
 The regular upkeep, patching, and
hardening processes used by the CSP.
 If this community CSP hosts data of multiple
banks, what preventive measures ensure
that Bank A cannot intentionally or
accidentally gain access to the database of
Bank B? Technically, this is achieved by
logical isolation at the hypervisor layer.
 Access controls to the hypervisors.
Data Security
We can further subdivide this critical topic into
multiple areas as below –
 Both data in transit and data at rest should
be encrypted. For data in transit, the auditor
needs to ensure mechanisms like HTTPS/TLS
(with forward secrecy), IPsec, and SSH are
employed.
 The steps taken by the bank for the safety
of the encryption keys.
 Does the CSP provide a clear backup and
data archival policy that gives assurance of
data recovery in the event of an unfortunate
incident?
 Data classification to identify what sensitive
data resides in the cloud, and what controls
apply to accidental deletion of data,
including archived data.
 Recommended certifications for the CSP
include (but are not limited to) ISO 27001,
PCI-DSS, and PA-DSS. Additionally, IDRBT
recommends its Cloud Security Framework,
SOC 1, and SOC 2. The auditor can verify
these certifications of the CSP.
Data Privacy – Privacy is the accountability for
collecting, processing, disclosing, storing, and
destroying data that could help in identifying an
individual. There is no specific consensus on what
counts as private data. You might have seen the
irony many times in banks, where Aadhaar card
copies are just lying on the desk.
Such a lenient approach is a strong "no" for
privacy in the cloud.
KPMG has defined a data life cycle along these
lines. The auditor can check the following –
 The auditor can review the SLAs set for
privacy of data in the contract.
 Is there any penalty clause if privacy is
breached?
 With increasing awareness of data privacy
and discussions on it in Parliament, there
could be further enforcement of regulations.
This could have impacts such as CSPs being
termed unaffiliated parties, with data
privacy regulations becoming more
stringent. Should such a scenario arise, the
auditor can verify the competency of the
CSP to align with the regulations.
Data Loss – Events beyond human control, like
floods and earthquakes, could be a potential
cause of data loss, along with human or technical
errors.
 Is there an agreed policy in place to recover
the data? This can be achieved if the CSP
has a concurrent data storage facility.
 The auditor can also demand evidence of
proactive testing records by the bank and
the CSP for data loss scenarios. This would
provide assurance of data retrieval, should
an event occur.
Must-have terms in the contract – The criticality of
the operations and the catastrophic impact of
failure put the auditor in the critical position of
identifying the necessary measures and ensuring
they are taken.
 Reviewing the GRC of the CSP. If possible,
reports on risk assessment, controls, and
their monitoring should be presented to the
bank by the CSP periodically.
 Reviewing whether the bank has analyzed
the forensic data it needs to collect, and
whether this is agreed with the CSP along
with the capture process. This is crucial
from the legal aspect.
Termination and Exit Clauses – The auditor needs
to review the contract to understand the
agreement between the bank and the CSP in case
of termination and closure. Such an event could
occur in multiple cases, like the CSP closing
operations, a dispute with the CSP, or operations
being transferred to a competitor CSP.
 No image or data should be withheld by the
CSP and used as a bargaining chip.
 Clear and well-established policies should
be defined and agreed should such a
scenario arise.
 Legal implications and penalty clauses in the
contract for misuse of residual data by the
CSP.
The thoughts given above are a way of making
compliance practices stronger and deriving
meaningful outcomes from the audits performed
in the cloud.
Measuring Robotic
Process
Automation
Productivity
Ecosystem of Humans
and Bots
Companies are often interested in knowing what
productivity means when humans and bots work
together. Shall we consider a bot as one human,
or as the equivalent of many humans, since it
works longer than human working hours? Some
disagree and say the bot is also faster than a
human, so should it be counted as a super-human
equivalent? There is a genuine interest in defining
the measure of productivity in the Robotic Process
Automation (RPA) context. If we look more
carefully, the need is not exactly to know how to
convert bots into humans, but to derive the
effectiveness of, or return on investment from, the
RPA exercise itself.
Larger Context of
Measurements
Before we deep-dive into RPA productivity, let us
understand how measurement evolves over time
with maturity. This will set the context for the
extent to which we need to address the challenge.
With every new technology upgrade that can
significantly impact the delivery of results,
measurement concentrates first on coverage;
then, as more areas implement it, we focus on
productivity and control measures; later we move
to prediction- and improvement-based measures;
and then to innovation or transformation
measures.
This cycle repeats every time a new technology or
delivery model arises. So today the need is more
to understand how much we apply RPA in our
projects and whether there is a way to say the ROI
is as high as we expected. The resulting question
is: what is the productivity of RPA, and how can we
measure it? The question of how many people a
bot equals is not exactly what companies want
answered; they want to know when they will get
ROI and how RPA is helping their results. A move
to more detailed productivity measures is not far
away.
Comparative
Productivity Gain
The context of projects where we apply RPA
varies, and it is tough to make a like-for-like
comparison. The simpler way to handle this
problem is to compare the effort spent to do
certain tasks at a known level of quality with the
effort spent, after implementation of RPA, to do
the same tasks with the same scope and to the
same level of quality.
This ensures the boundary of operation remains
the same and the parameters also remain the
same as we measure the effort impact.
In this model, we measure the outcome produced
and the effort spent together.
We can use the effort spent per unit of outcome
for productivity. However, the outcome itself can
be multi-dimensional; in such cases we may or
may not use a composite weightage. The picture
below depicts, for an application maintenance
service, how the effort computation model with
outcome can be used.
In this outcome- and effort-based productivity
approach, we can start measuring after 3 months
of operation. To account for the bot, we can either
apply the cost of the human whose work the bot
has taken over (often a junior resource), or use
another method where one license equals a
standard 9 hours. Alternatively, if a bot runs for
24 hours without interruption, we may count it as
3 person-equivalents. In any case, we are trying to
compare the outcome produced per unit of effort
before RPA and after RPA.
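The before/after comparison can be made concrete with a small worked example. All figures below are invented for illustration, and the 8-hour shift length used to convert bot hours into person-equivalents is an assumption (it yields the 3 person-equivalents for a round-the-clock bot mentioned above):

```python
# Compare outcome per unit of effort before and after RPA, counting a
# bot's running hours as person-equivalents (assumed 8-hour shifts, so
# a 24x7 bot counts as 3 person-equivalents).

def person_equivalents(bot_hours_per_day: float,
                       shift_hours: float = 8.0) -> float:
    """Convert a bot's daily running hours into person-equivalents."""
    return bot_hours_per_day / shift_hours

def productivity(outcome_units: float, effort_person_days: float) -> float:
    """Outcome produced per person-day of effort."""
    return outcome_units / effort_person_days

# Before RPA: 10 people x 20 working days produce 400 outcome units.
before = productivity(400, 10 * 20)                            # 2.0
# After RPA: 2 people plus one 24x7 bot produce the same 400 units.
after = productivity(400, (2 + person_equivalents(24)) * 20)   # 4.0
print(round(after / before, 2))  # comparative productivity gain: 2.0
```

Because scope, quality level, and the measurement window are held constant, the ratio isolates the effort impact of introducing the bot.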
A better approach is to measure the cost
reduction from introducing RPA services. In this
model, the work for a given scope was done at a
certain cost before the RPA period, and after RPA
that cost reduces. The cost computation here will
consider all the costs involved, both the initial and
the operational cost of the bot. This costing will
also reveal the cut-off period by which the RPA
investment starts giving benefits. This method is
simpler and provides better clarity in results.
However, what if the RPA focus itself is not cost
reduction but improving compliance, service
quality, or turnaround time? In such cases, these
improvements have to be converted into cost
figures and added to the savings.
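The cut-off period can be computed directly once the costs are on one scale. This sketch folds non-cost benefits (compliance, quality, turnaround time) into a monthly cost-equivalent figure, as suggested above; all amounts are invented for illustration:

```python
def breakeven_month(initial_cost: float,
                    monthly_cost_before: float,
                    monthly_cost_after: float,
                    monthly_benefit_equiv: float = 0.0) -> int:
    """First month in which cumulative savings cover the initial outlay."""
    monthly_saving = (monthly_cost_before - monthly_cost_after
                      + monthly_benefit_equiv)
    if monthly_saving <= 0:
        raise ValueError("RPA never pays back at these run rates")
    months, recovered = 0, 0.0
    while recovered < initial_cost:
        months += 1
        recovered += monthly_saving
    return months

# e.g. 60k to build and license the bot; operational cost drops from
# 40k to 32k per month; faster turnaround valued at 2k per month:
print(breakeven_month(60_000, 40_000, 32_000, 2_000))  # 6
```

The same function also makes the failure mode explicit: if the converted benefits do not outweigh the bot's operational cost, there is no cut-off month at all.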
In RPA, everyone agrees that it pays benefits, but
it is difficult to evaluate exactly to what extent it
pays back. This happens because RPA is not
applied to the entire service, but often to a portion
of the work. Data collection itself therefore has
many challenges, and the methods given above
can help us focus on macro numbers to get the
right feelers.
Productivity in the Upcoming Days
The next phase of measuring productivity can
touch on how the licenses are utilized, and on
how we improve the processes to an optimal
stage first and then apply RPA. We will move to
speeding up and reducing the RPA scripting
process, and start measuring the development
practices too.
RPA itself will gain many improvement measures
to aid in building productivity. The simpler way for
now is to concentrate on the cost of delivering
similar quality in a scope, before and after RPA,
and based on that to increase the coverage of RPA
across platforms.
My Corner!
Editor Page
Who moved my jobs! Who made my toolbox unfit!
Where is my business! These questions have
emerged in the minds of Quality Analysts more
often than ever. The landscape of information
technology projects is changing, and so is the
business delivery model. Reinvent and re-energize
quality concepts for the new digital world! This is
the automation era of the technology industry,
where code is automated, and so is the delivery
environment! There is nothing wrong with the old
techniques of Quality Assurance which we used
successfully; just the context has changed, and we
no longer deal only with humans to control
variations, but with bots and tools. The
environments have become simplified and have
many controls built into the service delivery
platform. We are slowly moving from preventing
failure with more time to detecting failure quickly.
Speed of delivery and collaborative working models
have achieved high priority, aided by advanced
tools. Quality definitions and techniques should
reflect these priorities and be integrated with the
tools. The days are not long off when the assurance
part is automated and audits are automated too.
This is the time for experts in quality to share and
develop new focus areas and concepts to build the
business with the support of tools.
The evolution of cloud, data speeds, tools,
collaborative methods, and lean applications has
changed the delivery models, and it has indeed
made many of our previous challenges void. So our
focus is to get aligned with the newer delivery
models and technologies to meet client
expectations on digitized delivery platforms.
Let's start a new journey of collaborative progress
with everyone in the quality analysis and
compliance field, to refocus our energy on
building new concepts, and on sharing and
defining the best for the future. This platform is
for everyone to express their views and thoughts
on the future openly, without being bound by
legacy.
Come, let's join our thoughts, let's refocus and
redefine concepts for Digital Quality!
Connect and share your views and articles here:
‘Contact@digitq.in’
Content
Disclaimer
The content expressed in this magazine is the
thought process of individuals, and we are not
responsible for validating or filtering the contents.
We do not ask for references, as new ideas and
innovative thoughts need not evolve from the
past. We encourage people to share information
that can create a spark in the reader's mind and
be applied in their own context. We leave
judgement over the content to the reader. If any
plagiarism is present, the author is responsible,
and we do not own the content. Similarly, the
concepts presented in this magazine are assumed
to be open for the public to analyze and apply in
their own cases. We encourage this magazine as
an open platform to share new thoughts,
concepts, results, and good practices that can lead
to digitizing quality to fit the new digital era.
Articles are
Welcome
We welcome articles on new ideas, concepts,
pilots, analyses, and research on Quality that
shape its focus for today's digital IT world. You can
share your thoughts with us as a complete article
and use this platform to reach thousands of IT
professionals and build a stronger new Quality
community. To digitize the new world of Quality,
send your article to us at
‘Contact@digitq.in’
Points to note: use smaller paragraphs, use your
own images or copyright-free images, and limit
articles to 3 or 4 pages. Avoid marketing and
objectionable language. Also, by sending articles
to us, you agree to their publication in public
media for free usage.
Edited by Vishnu Varthanan Moorthy
DigitQ.in

Q!Digitz

  • 1.
  • 2.
    !Digitz                                        Vol 1            Aug 2019 Assuring DevOps Delivers Well! DevOps Excitement The moment the name DevOps we here, there is an excitement in our minds, and we are certain another personal certi cation getting added in our career. After overcoming the initial days of excitement, I started to learn DevOps in the way they teach to the dev & ops team. Within a few hours it pushed me into the land of codes and mountains of tools and plug-ins. Automation and lots of automation and code move from one state to another state based on validation conditions met and nally it reaches to staging or production environment. This is it, Automating the pipeline of development and release cycle to the production environment with many tools, so no one manually intervenes or delays the ow. When I understood this way, the rst reaction is the role of a quality analyst? Does this mean we are losing our jobs with DevOps? What can I contribute in the world of automation and tool pipelines? There were many questions. The answer is, ‘as long as human is involved, variations and contexts involved, activities and outcomes involved, there is a role for Quality Analyst to perform constructively,’ DevOps Focus DevOps implementation in Engagements has several key driving factors. However, few of the important drivers are,  Purpose  Culture  Automation & Environment  Architecture In this article, we will focus on the role of Quality Analyst in a DevOps engagement. The journey starts either from scratch or transformation from an existing delivery model. The Quality Analyst can play an important role in Transformation and in run mode of the DevOps engagements. Often, engagements go for DevOps, because it's sold futuristically to client stating DevOps is the future and we have it for you now. With the reality of delivery, they are not sure why we need DevOps. 
How many times continuous delivery or continuous deployment done in a week and do we really need it, is a question arises at a later stage. Next comes culture, where the team is not ready for shorter cycles and never worked using collaborative and automated tools, then there are challenges on the rise. The simpler way is to make them use to Agile Scrum kinds of short cycles where requirements collection to releases they get used to. Automation is the next driver which often people misunderstand that once automated, the pipeline will remain the same. Q DigitQ.in
  • 3.
    Q!Digitz                                        Vol 1            Aug 2019 It depends on how compatible each of the software packages is and how they get upgraded and what trainings we provide to people. Also, there is someone to monitor the pipeline runs without issues. Next comes Architecture with Micro-service architecture, dockers, shipping ,etc where the architectural team has to take complete advantage of the model and the environment they use. These drivers also help a quality analyst know , each part has a role, failure points, needed improvements, monitoring performances, veri cation and validation needs. DevOps Work ow There are multiple context exist when we talk about DevOps, so here we will take the case of DevOps engagement using Agile Practices and Cloud Environment for development and delivery. The Engagement has to ensure it has the environment as per its expectations achieved and they sign Performance SLA with cloud providers. The licenses needed for the tools are managed by cloud provider or any of existing licenses will be used from organization has to be decided. We design the work ow architecture with the number of user, server required, tools con guration to enable the pipeline set up. Automating the work ow of development and delivery doesn’t mean the requirement, design and coding will get automated. The human e ort, prioritization and application of skill to develop product/application features to be paid adequate attention. Application of the Agile method helps stories/smaller functionalities, simpli ed development cycle and supports continuous integration. Agile Methodologies (Kanban, SAFe, etc can well manage the upstream part of development). This improves the application of DevOps automation , as periodic delivery and deployment is possible with automated pipelines. 
A Quality Analyst can spend adequate time in ensuring the agile practices or suitable shorter cycle time methods followed, resources available, pipelines are checked, licenses are available, Automated veri cation tools (code quality), continuous integration & delivery tools with reporting abilities are maintained in the engagement. Further to make the system of DevOps Succeed the stakeholders and business analyst shall provide features/functionality needs on time to a development team. The regression test cases release schedules to be maintained for the context. In addition, usage of cloud should control security practices to ensure authentication & it maintains authorization practices. Continuous monitoring of the deployed code and addressing of the incidents can be handled as per typical IT service management practices. Continuous improvement plays an important role in reducing the waste and minimizing the failures at any stage. In one case, we saw there were too many quality gates and approvals, which delayed & denied the faster deployment bene t itself. DigitQ.in
  • 4.
Quality Assurance in DevOps

As Quality Analysts, what we need to ensure in DevOps projects are team culture and readiness, process flow and criteria, configuration of scripts, verification and validation practices for the pipeline and the product/application, and KPI-based analysis and improvement. There are organizations that have developed 'DevOps Maturity' assessments and scoring models. Doing a maturity assessment is definitely a good practice, as it gives a benchmark for improvement. The following are the key elements we need to check as Quality Analysts in DevOps engagements.

To ensure that the DevOps workflow is clean and has adequate clarity, we need to have:
• Defined workflow
• Pipeline configuration blueprint/architecture
• Work instructions for the pipeline
• Training material/guidelines

Verification and validation in development practices shall have the following:
• Validated user stories
• Defect log
• Definition of Done (DoD) criteria
• Automated code review report and actions
• Unit test report and actions
• Test case review and versioning
• Traceability of user story vs. automated test case version
• Build failure log
• Deployment rollback log
• Test failure report

Change management will play an important role, as the context and the needs will keep varying; hence we shall check the following:
• Environment changes
• Pipeline flow changes
• Approval for changes
• Deployment of changes
• Impact analysis of changes

We shall check continuous delivery practices for the following:
• Approval process for configuration
• Code/script review
• Code/script configuration
• Access management of tools/environment for the team
• Policies for the Dev/Test/Ops groups

We shall perform security configuration checks on the following:
• Access to tools (logs)
• Security key storage
• License management
• SLA for cloud (as applicable)

Monitoring of the pipeline and its health is to be checked with the following:
• Application performance report
• Automated health-check report of the application
• Rollback failure analysis

Besides the checks given above, it's important that we have adequate measures as indicators in DevOps engagements.
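The checklists above lend themselves to being tracked as data rather than in a document. A minimal sketch follows; the category and item names are drawn from the lists above, but the data structure itself is a hypothetical illustration:

```python
# Hypothetical tracker for which DevOps QA evidence items have been
# collected in an engagement; item names follow the checklists above.

CHECKLIST = {
    "workflow": ["defined workflow", "pipeline blueprint",
                 "work instructions", "training material"],
    "change management": ["environment change", "pipeline flow change",
                          "approval", "deployment", "impact analysis"],
    "security": ["tool access log", "key storage",
                 "license management", "cloud SLA"],
}

def coverage_report(evidence):
    """evidence: {category: set of collected items} -> {category: % complete}."""
    return {cat: round(100 * len(evidence.get(cat, set()) & set(items)) / len(items))
            for cat, items in CHECKLIST.items()}

report = coverage_report({"security": {"key storage", "license management"}})
print(report)  # security is 50% complete; the other categories are 0%
```

A report like this makes the gaps visible per category instead of hiding them in a long document review.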
• Lead time from requirement to deployment
• Deployment rate per day or week
• Build failures in a period
• Deployment rollbacks, and others

DevOps Thinking for QA

DevOps is neither a magic wand nor, in today's context, a completely automated development platform. There are still human elements, synchronization elements, and tool configurations of process activities that need attention from the Quality Analyst. One small error in configuration can create a chain of events all the way to production and can cause business impact more than once. Hence, it's important that all aspects are validated well before the platform is launched for development. Similarly, the validation practices will include dynamic feature testing with aids, so human intervention and maintenance of configurations are still needed. In every context, high traceability needs to be maintained, and sometimes the domain will also demand logs of every activity with the relevant approvals. This needs attention from the Quality Analyst. DevOps in a way redesigns quality assurance: we need the skills to know the technology (such as cloud), the method (such as agile), and the set of CI/CD tools before we jump in and focus on speed of delivery, security of practices, and failure-based improvement.
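The indicators listed above can be computed directly from pipeline events. A minimal sketch, assuming a simple event record per user story (the field names and dates are illustrative, not from any specific tool):

```python
# Hypothetical computation of the DevOps indicators listed above from a
# list of pipeline events; the event schema is an assumption.
from datetime import datetime

events = [
    {"story": "US-1", "committed": "2019-08-01", "deployed": "2019-08-03",
     "build_failed": False, "rolled_back": False},
    {"story": "US-2", "committed": "2019-08-02", "deployed": "2019-08-07",
     "build_failed": True, "rolled_back": False},
    {"story": "US-3", "committed": "2019-08-05", "deployed": "2019-08-06",
     "build_failed": False, "rolled_back": True},
]

def days(a, b):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).days

lead_times = [days(e["committed"], e["deployed"]) for e in events]
avg_lead_time = sum(lead_times) / len(lead_times)   # requirement-to-deploy
build_failures = sum(e["build_failed"] for e in events)
rollbacks = sum(e["rolled_back"] for e in events)
print(avg_lead_time, build_failures, rollbacks)
```

Tracking these few numbers per sprint or per week is usually enough to see whether the pipeline is getting healthier or slower.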
Quality is Retaining Clients

Business Value of Quality

Contractual terms and delivery models have changed drastically with technology upgrades. The challenges Quality Assurance faced a few years ago no longer seem critical. The involvement of the client in delivery has brought in better functionality delivery and better control over milestones. In many organizations, Quality Assurance is treated as a multi-point weighted activity in which we try to cover each and every thing. However, the critical focus areas have loosened up: defect or schedule slippage is no longer the concern, while a few other things are. So we end up with a Quality Assurance function that has no focus but delivers a compliance score. In some places, it is just task-by-task observation that misses the point. So where is the business value addition from Quality? Can quality assurance have a critical focus and show why it's important? We will discuss that here.

Strategizing Focus in a Project

Quality Assurance in a project must have a clear focus that enables the project team to succeed in their work. Whatever the delivery model, there are two ways in which we can look at quality assurance: a) the circle of risk and b) the circle of improvement. Often we believe that because a project is stable the assurance team has nothing to do, or because a project is firefighting the team doesn't want an assurance person to visit. Both scenarios show a missed chance to influence and deliver better. Most assurance teams get confused in setting up a clear focus and building on it. To keep the case simple, we will assume a client gives us a 3- or 5-year contract to digitize their application portfolio and manage the existing infrastructure and applications. The focus for the assurance team should be progressive quality assurance that enables the project to develop abilities, perform to the context, and build maturity.

The result is obvious: the project will be able to renew the contract with the client, irrespective of market competition. The progressive journey of assurance is not an accident but a strategy to build focus and the abilities to perform at the next level. The assurance roadmap shall enable stabilizing the account, converting the account into a capable one, and from there leading toward contract renewal.
In the progress from an unstable account to a stable account, applying the "Circle of Risk" as the focus is critical to address the challenges and produce compliant, productive outcomes. In the progress from a stable account to a capable account, applying the "Circle of Improvement" is critical to address the opportunities and excel in performance.

Circle of Risk and Circle of Improvement

The parameters of a project don't change over its life cycle; the context and the environment do. So let's take any one type of life cycle, such as a development project. There are various parameters and components that we as an assurance team monitor and support. However, that support is valuable only when we associate a purpose with it and show the impact it can make. The Circle of Risk focuses on identifying failure points and arresting them: wherever the project can fail, and whatever can make it fail, is proactively analyzed, and the associated risks are highlighted so that the project can meet its expectations. This risk-based focus is very important in building stability in the project. The Circle of Improvement focuses on improving performance to move from the stable state to a capable state, where client expectations are fulfilled effectively and efficiently. It is about using opportunities and being foresighted to achieve the results and get ready for renewing the contract. The faster we achieve stability by applying the Circle of Risk, the more cycle time is available to improve performance with the Circle of Improvement. This ensures that the client is more delighted, and that we as the service provider understand the systems better and can develop better contractual negotiations.

The Circle of Risk would focus on the following areas:
• Resource skill and availability
• Dependencies and timelines
• Standards and procedures
• Tools application
• Security awareness and adoption
• Requirement clarity
• Validation methods
• Customer collaboration
• Compliance, security and regulations
• Reports and actions
• Problem analysis
• Governance and structure

For the initial few months to a year, the project has to ensure it is on top of building stability. The Circle of Improvement would focus on the following areas:
• Industrialization
• Automation
• Knowledge database and reusable components
• Tools enhancement
• Technology upgrades
• Innovation
• Process changes and upgrades
• Client intimacy
• Continual improvement
• Simplification and integration
• Cost saving

Year-on-year savings, and making the project ready for contract renewal, are achieved with the practices given above.

Contributing to Contract Renewal

This is not about communicating to the client what assurance certifications the organization holds. It's about building on the "Circle of Improvement" to focus on competitive and unique solutions for the contract. The assurance team can play a vital role in the contract renewal journey. Taking ahead from the Circle of Improvement, in the last 6 to 8 months before contract renewal time, it's important that we start baselining the project parameters. These data are vital for the client, and for the organization, to know and take any calculated risk in the delivery and pricing model. Every organization undergoes many changes over the few years of a project's contract period; hence, many new improvements or changes may have become available to the project team, and these have to be taken into consideration. Ideally, a 5 to 20% cost reduction for a similar scope is possible with these improvements and the knowledge people have gained. The assurance team not only baselines and supports the binding of improvements from multiple corners; it can also take part in developing the process architecture, building request-for-proposal components, and participating in due diligence. The assurance team can review the delivery model for successful delivery.

Roadmap to Renewal with Assurance

Contract renewal is not a last-two-months, performance-based activity; it is the outcome of the client realizing that the service or product is of quality for the given cost and meets their business need.

The project has to cross the hurdles of instability, reach stable delivery, and then reach the performing state. The assurance team has to ensure these transformations happen, with the goal of enabling contract renewal. It's easy to lose the path, or to relax by delivering in the same manner, but this won't make the organization the undeniable leader that gets the contract renewed. The Circle of Improvement, combined with a contract renewal focus, can bring enormous success.
Cooking an Artificial Intelligence (A.I.) Recipe for Client Delight

There is a huge amount of client satisfaction data available in major companies, often spread across years. One of the important expectations from any quality analyst is to oversee how the product or service is perceived by the client, and what opinions they carry about the service provider. In many organizations, the client satisfaction score is still a surprise result, after which a firefighting or appreciation chain starts. The data is stored for overall baselining of how many satisfied clients the company has. However, we rarely ask: can we predict how a client will feel and rate our product or service based on the existing data? "Hold on, every client is unique, and every point of contact with the client can be different" is a common objection. I agree with that view: they are unique. However, even among that uniqueness there can be patterns, and common likings and appreciations.

Let's take a few common examples. Across the globe, people have different behaviors, cultures, likings, and values, yet there are books that sell worldwide, services and foods sold everywhere, and movies that make maximum revenue irrespective of region. This is possible because we like a few abilities and characteristics across regions. Does this mean the globally successful products succeed in every region at the same level, and are they more successful than regional products? The answer is no. Regional variation, cultural variation, and likings all still have influence, but there is always something in common. This is the reason I feel an A.I. recipe is well suited to shaping a service for client delight. I assume the reader has a basic understanding of the connection between A.I., machine learning, and deep learning; if not, go through the YouTube video https://youtu.be/WSbgixdC9g8.

When a project starts with a client, by knowing the key drivers we should be able to tell whether we will achieve client delight or fall short of it. This would help us follow the drivers and take actions to handle them better. As our intention is to know the drivers and the resulting outcome, going ahead with deep learning is not the right choice, since we need interpretability.
Instead, we can apply a machine learning technique like a decision tree. As said earlier, we assume we have a reasonable amount of data with which to construct the decision tree. The data shall have relevant characteristics (drivers) like sector (private vs. public), domain (healthcare, aerospace, etc.), type of service (application maintenance, product development, etc.), region (countries or states), technology (digital, cloud, big data, mainframe, .NET, etc.), year of contract (1st year, 2nd year, etc.), type of contract (fixed price, time and material, etc.), method (agile, DevOps, incremental, etc.), and other relevant attributes. In every organization there will be a few people who look for patterns, but a pattern conditioned on several other variables is difficult to find by simple visual inspection. It needs a model like a decision tree to provide insights and give results quickly. To know more about decision trees, watch the YouTube video https://youtu.be/DCZ3tsQIoGU.

We can use the organization's existing data to train the decision tree; for this, we might split the data into two parts for training and one part for testing. We can use the scikit-learn library with Python for supervised and unsupervised learning. Once the decision tree is built and visualized with its branches, the nodes from which the branches start are the drivers we need to watch out for, and the values that lead to client dissatisfaction should be controlled, or actions taken to balance them out. For example, we might find that in 15 percent of client-dissatisfaction cases the key combination of drivers was: private sector > aerospace > product development > Germany > cloud technology > time and material > incremental model, and that when the incremental model was changed to agile, the dissatisfaction was much lower.
Such insights about client satisfaction are gold for any quality or delivery person working toward a better recipe for developing software with the client. The strength of these machine learning models is that they can read large volumes of data and keep correcting their learning to give better results. The decision tree is only an example; many other algorithms exist that are better or comparable. The reason we are talking about it here is that we as quality analysts should not just baseline client satisfaction and leave it there. We can predict the behavior, and we can find the influencing characteristics that make a good recipe for client delight. Let's explore A.I. for application in the quality engine.
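The approach described above can be sketched with scikit-learn. Everything here is illustrative: the driver columns, their encoding, and the six toy records are assumptions, not real client data; with real data one would also hold out a third of it for testing, as the 2:1 split mentioned above suggests.

```python
# Illustrative decision-tree sketch for predicting client satisfaction
# from engagement "drivers"; the features and records are toy data.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Each record: [sector, domain, contract_type, method]; label 1 = satisfied.
records = [
    ["private", "aerospace",  "T&M",   "incremental"],
    ["private", "aerospace",  "T&M",   "agile"],
    ["public",  "healthcare", "fixed", "agile"],
    ["public",  "healthcare", "fixed", "incremental"],
    ["private", "healthcare", "T&M",   "agile"],
    ["public",  "aerospace",  "fixed", "incremental"],
]
labels = [0, 1, 1, 0, 1, 0]

# Encode the categorical drivers as numbers, then fit a shallow tree so
# the resulting branches stay readable.
enc = OrdinalEncoder()
X = enc.fit_transform(records)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# Score a new engagement profile.
new = enc.transform([["private", "aerospace", "T&M", "agile"]])
print(tree.predict(new))
```

In this toy data the "method" driver separates the labels, so the fitted tree branches on it first; on real data, inspecting which drivers appear near the root is exactly the insight the article describes.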
Agile Culture Adoption and Client Contentment
Author – Aarti Patil

Agile has become the default choice for development, simply because it delivers a higher success rate than other delivery models. The 2018 CHAOS report (Standish Group) classified agile projects as 42% successful, 50% challenged, and only 8% failed. This is a lucrative offer for everyone to start going the agile way. The core of agile success revolves around agile culture; it is very robust, but at the same time hard to implement without well-built senior management support. A few important basic factors that can improve the probability of success during implementation are a strong reason for the mindset shift, clear visibility of the new ways of working, upskilling on process and tools, motivation, and compensation and reward programs. Business readiness is crucial: the IT organization needs to demonstrate to the business how agile development will drive value, with its pros and cons, rather than just selling agile. Business-specific training programs and workshops on agile methodology can help the business follow agile diligently. This will bring an enormous transformation in the way IT and business interact, which will fetch quality deliverables for the business with speed to market. Building the right skills in emerging technologies, combined with team restructuring, will bring more awareness of agile culture at the organization level. Matching agile culture with client satisfaction is the toughest challenge in reality. Organizations are offering agile development solutions to the business, but is agile actually executed in an agile way? Is it steadily becoming hybrid? Or eventually moving back to waterfall? Timeboxed deliveries or stand-up meetings alone don't make an agile culture; in fact, they create false confidence. End-to-end agile delivery is hard to adopt.

Today, IT organizations and other industries are acquiring business by promising to deliver software faster and meet market demands. At the same time, it is vital for both vendor and client to understand the agile methodology and the various frameworks used within it to execute projects. Client satisfaction is critical for a long-term business association.
IT organizations set their focus on agile requirements gathering in the form of user stories, which should be clear, supported by artifacts, aligned with business expectations, estimable, and testable. Most important are the infrastructure needs: a well-defined environment setup for quick transition, the lack of which is sometimes a major reason for delayed delivery to the business, further affecting cycle time and cost. User stories should be as independent as possible. Handling client expectations in such scenarios becomes difficult, so a thorough understanding of the client's business, expectations, industry background, and market trends should be prioritized by involving experts, which will make project execution and transition smoother. This will also open new opportunities and areas in the business by earning client contentment for the future. Agile culture is not independent of client contentment; it is the way we collaborate and adapt to achieve client expectations through agile practices.
Managing Effective Cloud Migration with QA Validations

Cloud will cover the IT world soon, with 60% global market growth expected in the next 3 years and 80% of organizations moving to the cloud by 2025 (reports from Gartner and Computerworld UK). Many organizations have embraced the cloud in principle, but there is often one challenge: the detail. Organizations choose cloud for multiple reasons, including data center reduction, increased global presence, need for processing power, cost benefit, etc. As they move from agreement in principle to evaluating public or hybrid cloud service providers and the services they need, they have to get into the detail. The details include the various applications, servers, dependent configurations, criticality, compliance, users, and so on. Getting an understanding of their own IT systems and their criticality takes adequate time. Then they need to prioritize which applications or servers have to be moved to the cloud first, which also involves deciding who the cloud service provider is and what kind of migration support they will provide. Most organizations select less critical applications that often need changes (e.g., websites) and then gain the comfort to migrate other applications and servers. Many organizations hesitate and continue with their on-premise IT systems even when they know their IT landscape well, because they lack clarity on migration and on what kind of outcome they can achieve with less pain. Cloud service providers understand the problems and expectations of these organizations. Most of the top providers have come out with their own migration life cycles, with phases and deliverable lists, from an initial assessment through to tools that can simplify the migration activities. These are important to boost the IT team's confidence in the business organizations.

However, migration is not a simple process even with the current level of capabilities; hence a stronger, life-cycle-based, phased approach can enable smoother migration.
As IT organizations, most of us are involved as service providers who help a business organization take services from cloud service providers and migrate on-premise applications, servers, etc., to a public, hybrid, or private cloud. The image given here is a generic approach drawn from existing cloud service providers' migration models. The initiation phase involves understanding the needs and establishing agreements between all parties. The discovery and assessment phase involves portfolio assessment, dependency and cloud-fit assessment, and then the migration-items pipeline. The planning or design phase involves the migration plan with acceptance criteria, a migration strategy and landing-zone architecture, a training plan, and a pilot go/no-go. The migration phase involves setting up the infrastructure to be migration ready, migrating, and right-sizing the service; here the approach can be refactor, re-platform, re-host, or repurchase. The integration/validation phase involves IT integration, cut-over, UAT sign-off, training, and the post-migration report. The optimization and closure phase involves the optimization assessment, performance monitoring reports, and closure reports. The Quality Analyst shall ensure that the phases and deliverables are followed with no deviation, that each critical deliverable undergoes the relevant verification/validation activity, and that users are trained to operate the systems. The Quality Analyst has to check that the acceptance criteria are met and performance is monitored, to ensure the migration program is going well. The following are some key activities and deliverables for which QA has to review that the relevant aspects are addressed.

SOW/Contract Review - Check the migration scope and any specific performance/security targets, and third-party dependencies.

Portfolio Assessment/Catalog of Source - Review - Check if the report is shared with the client and approved, and that any related risks are updated in the risk log or any client-shared register.

Source Analysis Report (or) Cloud Affinity Index & Decision Tree - Check that dependency details are filled in and the current performance of systems/components is baselined. Is the report signed off/agreed with the client?

Migration Plan (& Migration RACI) - Check for the timelines, phases and deliverables, acceptance and success criteria, to-be performance state, resource needs, and RACI. Check for approval of the migration plan. Check if any tool selection was done for migration; if so, the finalization of the relevant factors should be documented.

Migration Strategy - Check for the pattern of migration (refactor/re-host/re-platform/etc.). Check that the security responsibilities, reliability, performance needs, and cost considerations of the target/landing architecture are documented.

Migration List/Prioritized List (or) Migration Information Form -
Check for the updated list of migration-ready components/servers/data/applications.

Pilot Report (as applicable) - Check for the go/no-go decision and that the challenges, risks, and lessons learnt are documented. The lessons are to be carried into the migration activities.

Migration Schedule - Check for the intermediate milestones and the planned percentage completion. Check for the dependencies identified for meeting schedules, and the associated risks.

Master List with Configuration Details and Status (&/) Run Book - Check for the schedule and status of migration, pending issues, and the runbook: detailed steps/activities with configurations, checkpoints, and status details.

Test Report - Check for the functional, security, and performance test cases and the test report.

Cut-Over Plan - Check for the readiness and rollback plan.

Training Plan & Records - Training materials, the training plan, and the training report/completion details are to be maintained for the user/client.

Post-Migration Report & Acceptance - Check that the success criteria are met, along with performance measures, recorded issues/resolutions, etc. Check for the approval.

Pre- & Post-Migration Technical Review Checklist - Check if the project team used the pre- and post-migration checklists and any tool for evaluating the migration, and that the checklists are actually applied in the project.

Migration Metrics - Agree with the project on the migration metrics and review the data on a monthly or biweekly basis, with defined thresholds and violations supported by analysis.

The checks on the activities and deliverables given above will enable the cloud migration to resolve challenges quickly and give a clear view of work-item progress. Often, migration is addressed weakly after an enthusiastic start; then, based on time availability, the depth of migration varies across accounts, which often impacts the quality of migration. So it's important for us to ensure that the right level of progress happens on every front of the migration.

The thought process given above can help in building a stronger connect.
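A review of this deliverable list is easy to mechanize. Here is a minimal sketch of tracking QA review status per deliverable; the deliverable names follow the list above, while the status values are a hypothetical convention:

```python
# Hypothetical tracker for QA review status of the cloud-migration
# deliverables discussed above; statuses are illustrative.
DELIVERABLES = [
    "SOW/Contract Review", "Portfolio Assessment",
    "Source Analysis Report", "Migration Plan", "Migration Strategy",
    "Pilot Report", "Test Report", "Cut-Over Plan",
    "Post-Migration Report",
]

def pending_reviews(status):
    """status: {deliverable: 'approved' | 'in-review' | 'missing'} ->
    deliverables not yet approved (unlisted ones count as missing)."""
    return [d for d in DELIVERABLES if status.get(d, "missing") != "approved"]

status = {"SOW/Contract Review": "approved", "Migration Plan": "in-review"}
print(pending_reviews(status))
```

Reviewing this pending list at each migration phase gate keeps the "weakly addressed after an enthusiastic start" failure mode visible.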
Agile @ Scale – What Does It Mean to a Quality Professional?
Author – Vasanthi Veerappan

It was way back in 2010 when a US banking client started their agile journey as part of an IT transformation program, and my organization was supporting them in this journey as an offshore software vendor. It was my first association with an agile project as a quality analyst and, to be frank, I was very apprehensive about the way the entire process works in an agile fashion. I was zapped by terms like sprint, poker estimation, retrospective, etc. Everything sounded new, and I was constantly on my toes trying to identify any problem, alert, risk, or slippage that would reassure me that traditional waterfall development still works and that the IT process world was still the one I was comfortable with. Needless to say, the project faced some delivery issues related to quality and schedule. But overall, when a customer satisfaction survey was done 10 months later, after 3 releases, it was a moment of celebration for every stakeholder associated with the project for the success they had experienced through the agile life cycle. So at that point it struck me evidently that agile is here to stay, and now, almost 9 years down that line, agile has indeed stayed, and agility has taken on predominant importance at the next level as well, i.e., Agile @ Scale. In the agile world, almost every aspect of governance and implementation, including the quality of deliverables, is a shared responsibility of the whole Scrum team, and there is always a question of where an independent role like QA fits into the whole gamut. What needs to be noted here is that it's not just the technical code quality of deliverables that matters, but how we sustain quality in the longer run. This needs to be achieved using built-in quality practices that ensure team agility.

As a quality professional in this era, I strongly believe that processes hold even more importance than they did back then for smaller programs, and just as agile practices and principles are extended to scale up to larger programs, our QA processes also need to be scaled up accordingly and integrated into the system. There are some areas I have learnt from the engagements I have supported during the last few years, and I think these are areas every QA professional should take into account, or take up responsibility for, when supporting large agile programs.

Think Large
Always derive quality themes and quality strategies for the large program at the overall account level. Never start at the team level to establish quality goals. This is one mistake we make when we support large programs for quality assurance: we start off small, trying to define a governance plan, tailoring the processes, establishing a measurement system, etc., for the first Scrum team, and then when another Scrum team onboards, we repeat the same process. Slowly we realize that what fits one team does not fit another, and it leads to chaos in the middle of program execution. So always look at the account or program, understand what the business organization's and technology goals are, and then derive the quality themes and strategy such that they get embedded within the overall account strategy. When the themes are set at the larger program level, they flow in as quality goals to the individual Scrum teams, and the focus and importance are built into the processes themselves. E.g.: define KPIs for UI/UX teams or test automation teams; establish feedback mechanisms at various levels; etc.

Design Ways of Working Across All Agile Teams

While different scaling models like SAFe, LeSS, Nexus, etc., provide guidance on what practices need to be defined, how the teams need to be structured, how the practices need to be implemented, and so on, as quality professionals we still have a lot of work to do in setting up the ways of working within the team. It starts right from defining and agreeing with all Scrum Masters, Product Owners, etc., on what standards to follow (e.g., development framework, defect tool, testing tool) through to defining collaboration mechanisms for the various roles within the teams (e.g., between solution architects, between business analysts). It could also be simple things like what the defect status workflow should be while tracking defects.

For example, if one Scrum team decides to use a visual board for tracking, ensure it gets used in a similar fashion across all teams; this removes many overheads during program-level metrics tracking. The bigger the team, the better the facilitation skills required to bring every stakeholder to the table and agree on the ways of working. It doesn't stop there: once agreed, it becomes a prime responsibility of the QA professional to determine how the ways of working are introduced to newly added teams, how those teams are trained, and how we monitor quality and progress throughout.
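A shared defect status workflow, as mentioned above, is one of the easiest things to pin down as data so every team tracks defects through the same states. A minimal sketch; the states and transitions are illustrative, not from any specific tool:

```python
# Hypothetical program-wide defect status workflow: each state maps to
# the set of states a defect may legally move to next.
WORKFLOW = {
    "new":         {"triaged"},
    "triaged":     {"in-progress", "rejected"},
    "in-progress": {"fixed"},
    "fixed":       {"verified", "reopened"},
    "reopened":    {"in-progress"},
    "verified":    {"closed"},
    "rejected":    set(),
    "closed":      set(),
}

def valid_transition(current, nxt):
    """True when moving a defect from `current` to `nxt` follows the agreed workflow."""
    return nxt in WORKFLOW.get(current, set())

print(valid_transition("fixed", "verified"))  # an allowed move
print(valid_transition("new", "closed"))      # skipping states is not allowed
```

Encoding the agreed workflow once, and validating every team's board against it, is what makes program-level defect metrics comparable across teams.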
This is what will ensure that the team of agile teams works together as one unit. While models like SAFe do talk about functions such as the LACE (Lean-Agile Center of Excellence) that help set this up, such functions do not get formed at the very beginning, which is the very reason we need to jump in and help the teams at the start.

Identify Systemic Failures

Establishing ways of working also leads to the next imminent step: managing their implementation. One obvious way to do this is by monitoring and understanding the quality trends from the different teams. As QA people, we are uniquely positioned to see the complete big picture using the parameters from all the teams and to assess the overall system quality. When we receive the input from all teams and their cross-functional activities, we get to understand the overall systemic problems, identify the bottlenecks, drill down to specific working practices, and come up with the right recommendations on how the practices need to be adapted to fix the gap.

Scaling with Agile

Scaling requires everyone to collaborate to make it work. The Scrum teams might very well take the lead in establishing built-in quality through code-quality practices, testing practices, and so on, but it's not enough to implement just the advocated models' current guidance on quality practices. It requires attention and discipline at every level, and we as quality ambassadors should be ready to relentlessly help the teams inculcate this quality culture within.
    Q!Digitz                                        Vol 1            Aug 2019 Auditing Cloud for Banking Domain Authors–Anand Patel, Archita Ghadi, Sonal Shah We have heard of penetration of cloud in many sectors and keep on hearing of its ever-increasing usage, its pros and cons. But we rarely hear about Cloud going hand in hand with the BFSI domain. The moment we hear of the BFSI and Cloud combination, our anxiety reaches its peak. Common questions that come to mind are “Will it be safe?”, “How the hell they are to handle Data Privacy”, “Will sensitive data be secured” and so on and so forth. India is the rst country in the world to have a “Banking Community Cloud”. IDRBT is a Research and Development Institute established by RBI. IDRBT in alliance with C-DAC has successfully achieved this feat. IDRBT has deployed IaaS services in this community cloud. This paper majorly talks about what should be an Auditor’s focus areas and challenges in auditing such a system. We will deal with all major concerns in depth in this paper, beginning from goals and taking it ahead to technical details. Business Goals–To begin with the Auditor needs to verify what business goals have driven this decision? And is the implementation aligned to these goals.  Very common business goals would be like– cost savings. Would there be a considerable reduction in the infrastructure-related capital expenditure of the banks?  Would it help bank by removing their dependency on IT maintenance or hiring a highly technical person and achieve scalable good performance?  In-depth assessment of the driving factors that led to the decision of agreement to Community Cloud could be a good starting point.  The buy-in of the IT Steering Committee and creation of the Cloud Policy Statement could provide a basis for further work.  Has there been a formal vendor viability assessment by the bank prior to being a party to the community cloud? 
Regulatory Compliance and Needs – Banks need to comply with multiple national and international regulations while handling customer data. Many banks require that the financial details of the customer stay within the geographical boundaries of the country.
The auditor can ask for an inventory of the mandatory compliance needs of the bank. Depending on the business needs, some compliance needs will be common across all banks and the rest will be specific to the individual bank.
• Are all the regulatory and compliance needs satisfied by the CSP? How can that be verified?
• Would the CSP provide certificates that validate adherence to compliance needs?
• Extending on the same lines, can the CSP itself be audited?
• With banks, the challenge gets tougher. With increasing globalization and changing financial scenarios, compliance requirements that are not applicable today may become mandatory in the near future. In such scenarios, would the CSP have the ability to comply and provide support?

So, in this community cloud, an auditor needs to focus on the specific compliance needs of the bank and the controls the CSP deploys for assurance.

Reliability and Availability – In the digital age, with features like mobile banking, availability of the applications becomes a very critical factor. The CCID (Cloud Computing Incidents Database) has recorded cloud outages ranging from a few minutes to 48 hours, which amply shows that cloud is not immune to outages. In our example of IaaS for the Banking Community Cloud, the IaaS delivery model would be used for computing and storage infrastructure along with certain services like account management, message queue service, database service, etc.
• The auditor should verify the contract to understand the services offered by the CSP and the impact of outages on them.
• Geographical diversity of the data center architecture and its fault tolerance.
• Availability management processes and BCP of the CSP.
• Impact of non-availability of the database on applications and transactions in process.
• What communication mechanism is agreed between the CSP and the bank in case of such outages?
• Impact analysis by the bank which helped establish the RTO and RPO baselines, and the subsequent agreement by the CSP.
• Contingency plan developed by the bank for outage periods.

Interoperability and Portability – In the fast-changing business landscape, a bank may sometimes need to change its CSP, and there could be multiple reasons for doing so. In these scenarios, it makes sense to assess portability and interoperability; not doing so carries the risk of vendor lock-in. From an IaaS perspective, the storage capability of the CSP would be of highest concern. Interoperability would not be a major issue with IaaS because the banks would own the applications themselves, so there would be no impact on application interfaces.
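The availability and RTO review discussed above can be made concrete with a small calculation relating outage minutes to an availability percentage and to the agreed recovery time objective. This is a sketch with invented figures, not drawn from any real SLA.

```python
# Illustrative calculation relating outage duration to availability and
# to the agreed RTO. All figures are hypothetical.

MINUTES_PER_MONTH = 30 * 24 * 60  # a 30-day month

def availability_pct(outage_minutes, period_minutes=MINUTES_PER_MONTH):
    """Availability over a period, given total outage minutes."""
    return 100.0 * (period_minutes - outage_minutes) / period_minutes

def breaches_rto(outages, rto_minutes):
    """Return the outages that exceeded the agreed recovery time objective."""
    return [o for o in outages if o > rto_minutes]

outages = [12, 95, 30]  # outage durations in minutes (hypothetical)
print(round(availability_pct(sum(outages)), 3))  # 99.683
print(breaches_rto(outages, rto_minutes=60))     # [95]
```

An auditor could ask for exactly this kind of evidence: measured outage history versus the contracted availability figure and RTO.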
• After migration of infrastructure to a new vendor, would the existing CSP release the IPs?
• After potential termination of the contract: portability of data and metadata (e.g. the format of the output/extract from the vendor) and purging of data by the service provider. Data remanence poses a serious security threat, and the auditor needs to double-check the mechanisms enforced by the former CSP for the storage media after release of the data.
• The CSP should have agreed to, and should evidence, the clearing and sanitization approach used. The auditor should refer to certificates which specifically cover media sanitization, such as the NIST SP 800-88 guidelines, to ensure the right compliance by the CSP.
• The auditor can review the CSP's data destruction policy, if accessible.

Security and Data Privacy – This forms the meat of the entire audit. Data security and privacy are core to any business holding customers' sensitive data, and banking qualifies for extra scrutiny. The auditor should check whether the bank has access to a security audit report of the CSP. We can segregate this topic into multiple areas as below.

Physical Security
• What guarantees are provided by the CSP to assure the physical security of data centers, storage, and network resources?

Network Security
• Is there an agreement to have access to network-level logs of the CSP?
• An agreement to investigate and collect forensic-level data?
• How are IPs released by the CSP, and how are they re-assigned?
• SIEM (Security Information and Event Management) controls of the CSP, such as firewalls, IPS, and IDS.
• The CSP may collect syslog files. Has a risk assessment been done to understand what data goes into the syslog files (such as authentication and authorization details)? The auditor needs to question the bank to understand the inventory of this data.
• Regular upkeep, patching, and hardening processes used by the CSP.
• If this community CSP is hosting data of multiple banks, what preventive measures are taken to ensure Bank A cannot intentionally or accidentally gain access to the database of Bank B? Technically, this is achieved by logical isolation at the hypervisor layer.
• Access controls to the hypervisors.

Data Security
We can further subdivide this critical topic into multiple arenas as below.
• Both data in transit and data at rest should be encrypted. For data in transit, the auditor needs to ensure that mechanisms like HTTPS/TLS (with forward secrecy), IPsec, and SSH are employed.
• Steps taken by the bank for the safety of the encryption keys.
• Does the CSP provide a clear backup and data archival policy, which gives assurance of data recovery in the event of an unfortunate incident?
• Data classification to identify what sensitive data resides in the cloud, and what controls apply to accidental deletion of data, including archived data.
• Recommended certifications from the CSP include (but are not limited to) ISO 27001, PCI-DSS, and PA-DSS. Additionally, IDRBT recommends its Cloud Security Framework, SOC 1, and SOC 2. The auditor can verify these certifications of the CSP.

Data Privacy – Privacy is the accountability to collect, process, disclose, store, and destroy data that could help in identifying an individual. There is no specific consensus on what it means for data to be private. You might have seen the irony many times in banks, where Aadhaar card copies are just lying on the desk. Such a lenient approach is a strong "no" for privacy in the cloud. KPMG has a defined data life cycle for this purpose.
• The auditor can review the SLAs set for privacy of data in the contract.
• Is there any penalty clause if privacy is breached?
• With increasing awareness of data privacy and discussions on it in Parliament, there could be further enforcement of regulations. For example, CSPs could be termed unaffiliated parties, and data privacy regulations could become more stringent. Should such a scenario arise, the auditor can verify the competency of the CSP to align with the regulations.
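Part of the data-in-transit check above can be automated. The sketch below uses Python's standard ssl module to report the TLS version an endpoint negotiates; the host name is a placeholder, not a real banking endpoint.

```python
# A minimal sketch of checking the TLS protocol version offered by an
# endpoint, as one input to the data-in-transit review. The host name
# used in the commented example is a placeholder.

import socket
import ssl

def tls_version(host, port=443, timeout=5.0):
    """Connect to host:port and return the negotiated TLS version string."""
    ctx = ssl.create_default_context()  # validates certificates by default
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as ssock:
            return ssock.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

# Example (placeholder host; requires network access):
# print(tls_version("bank.example.com"))
```

A result of 'TLSv1.2' or 'TLSv1.3' would be one piece of evidence; cipher suites and forward secrecy would still need a fuller scan.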
Data Loss – Events beyond human control, like floods and earthquakes, could be a potential cause of data loss, along with human or technical errors.
• Is there an agreed policy in place to recover the data? This can be achieved if the CSP has a concurrent data storage facility.
• The auditor can also demand evidence of proactive testing records by the bank and the CSP for data loss scenarios. This would provide assurance of data retrieval, should such an event occur.

Must-have terms in the contract – The criticality of the operations and the catastrophic impact of failure put the auditor in a critical position to identify and ensure the measures taken.
• Reviewing the GRC of the CSP. If possible, reports on risk assessment, controls, and their monitoring should be presented to the bank by the CSP periodically.
• Reviewing whether the bank has analyzed the forensic data it needs to collect, and whether this, along with the capture process, is agreed with the CSP. This is crucial from the legal aspect.

Termination and Exit Clauses – The auditor needs to review the contract to understand the agreement between the bank and the CSP in case of termination and closure. Such an event could occur in multiple cases, like the CSP closing operations, a dispute with the CSP, or transferring operations to a competitor CSP.
• No image or data is withheld by the CSP or used as a bargaining chip.
• Clear and well-established policies are defined and agreed upon, should such a scenario arise.
• Legal implications and penalty clauses in the contract for misuse of residual data by the CSP.

The above thoughts are a way of making compliance practices stronger and deriving meaningful outcomes from audits performed in the cloud.
Measuring Robotic Process Automation Productivity

Ecosystem of Humans and Bots
Companies are often interested in knowing what productivity means when humans and bots work together. Shall we consider a bot as one human, or as the equivalent of many humans, since bots work longer than human working hours? Some disagree and say the bot is faster than a human, so should it be considered a super-human equivalent? There is a genuine interest in defining the measure of productivity in the Robotic Process Automation (RPA) context. If we look more carefully, the need is not exactly to know how to convert bots into humans, but to derive the effectiveness, or return on investment, of the RPA exercise itself.

Larger Context of Measurements
Before we dive deep into RPA productivity, let us understand how measurement evolves over time with maturity. This will set the context for the extent to which we need to address the challenge. For every new technology upgrade that can significantly impact the delivery of results, measurement concentrates first on coverage; then, as more areas implement it, we focus on productivity and control measures; later we move to prediction- and improvement-based measures; and finally to innovation or transformation measures. This cycle repeats every time a new technology or delivery model arises. So today the need is to understand how much we apply RPA in our projects, and whether we can say the ROI is as high as we expected. The resulting question is: what is the productivity of RPA, and how can we measure it? The question of how many people a bot equals is not exactly what companies want to answer; they want to know when they will get ROI and how RPA is helping in results. A move to more detailed productivity measures is not far away.
Comparative Productivity Gain
The context of projects where we apply RPA varies, and it is tough to make a like-for-like comparison. The simpler way to handle this problem is to compare the effort spent to do certain tasks at a known level of quality before RPA with the effort spent, after implementing RPA, to do the same task with the same scope and the same level of quality.
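The before/after effort comparison above reduces to simple arithmetic. A minimal sketch, using hypothetical person-day figures:

```python
# Productivity gain as percentage effort reduction for the same scope
# and the same level of quality. The person-day figures are hypothetical.

def productivity_gain(effort_before, effort_after):
    """Percentage reduction in effort for the same task, scope, and quality."""
    return 100.0 * (effort_before - effort_after) / effort_before

# Hypothetical: a task took 120 person-days before RPA, 75 after.
print(productivity_gain(120.0, 75.0))  # 37.5
```

The comparison is only meaningful if the scope and quality level really are held constant, as the article stresses.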
This ensures the boundary of operation remains the same and the parameters also remain the same as we measure the effort impact. In this model, we measure the outcome produced and the effort spent together, and we can use effort spent per unit of outcome as the productivity measure. However, the outcome itself can be multi-dimensional; in such cases we may or may not use a composite weightage. The picture below depicts, for an application maintenance service, how the effort computation model with outcome can be used.

In this outcome- and effort-based productivity approach, we can start measuring after three months of operation. To account for a bot, we can either apply the cost of the human whose work the bot has taken over (often a junior resource), or treat one license as a standard nine-hour shift. Alternatively, if a bot runs for 24 hours without interruption, we may count it as three person-equivalents. In any case, we are trying to compare the outcome produced per unit of effort before RPA and after RPA.

A better approach is to measure the cost reduction from introducing RPA services. In this model, the work was done before the RPA period for a given scope at a certain cost, and after RPA that cost reduces. The cost computation here considers all the costs involved, both the initial and the operational cost of the bot. This costing also helps in knowing the cut-off point by which the RPA investment will start giving benefits. This method is simpler and provides better clarity in results. However, what if the RPA focus itself is not cost reduction, but improving compliance, service quality, or turnaround time? In such cases these gains have to be converted into cost figures and added to the savings. In RPA, everyone agrees that it pays a benefit, but it is difficult to evaluate exactly to what extent it pays back.
This happens because RPA is often applied not to the entire service but to a portion of the work; hence data collection itself has many challenges, and the methods given above can help us focus on macro numbers to get the right feelers.

Productivity in the Upcoming Days
The next phase of measuring productivity can touch on how the licenses are utilized, and on improving the processes first to an optimal stage before applying RPA. We will move to speeding up and reducing the RPA scripting process, and start measuring the development practices too.
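The cost-reduction model and its cut-off point described above can be sketched as a small break-even calculation. All figures below are invented for illustration; they are not from the article.

```python
# A hypothetical before/after cost comparison for an RPA rollout,
# illustrating the break-even ("cut-off") calculation described above.
# All monetary figures are invented.

def breakeven_month(initial_cost, monthly_bot_cost, monthly_saving):
    """First month in which cumulative savings cover the RPA investment,
    or None if the bot's running cost eats the whole saving."""
    net_monthly = monthly_saving - monthly_bot_cost
    if net_monthly <= 0:
        return None  # the investment never pays back
    month, cumulative = 0, -initial_cost
    while cumulative < 0:
        month += 1
        cumulative += net_monthly
    return month

# Hypothetical: 60k setup cost, 5k/month to run, 15k/month of effort saved.
print(breakeven_month(60_000, 5_000, 15_000))  # 6
```

Non-cost benefits such as compliance or turnaround-time gains would, as the article notes, have to be converted into a monthly cost figure before feeding into this calculation.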
RPA itself will have many improvement measures to aid in building productivity through RPA. The simpler way for now is to concentrate on cost for similar quality in a given scope, before and after RPA; based on that, we can increase the coverage of RPA across platforms.

My Corner!
Editor Page
Who moved my jobs! Who made my toolbox unfit! Where is my business! These questions have emerged in the minds of quality analysts more often than ever. The landscape of information technology projects is changing, and so is the business delivery model. Reinvent and re-energize quality concepts for the new digital world!

This is the automation era of the technology industry, where code is automated, and so is the delivery environment! There is nothing wrong with the old techniques of quality assurance which we used successfully; only the context has changed. We no longer deal only with humans to control variations, but with bots and tools. Environments have become simplified and have many controls built into the service delivery platform. We are slowly moving from preventing failure with more time to detecting failure quickly. Speed of delivery and collaborative working models have achieved high priority, aided by advanced tools. Quality definitions and techniques should reflect these priorities and be integrated with tools. The day is not far off when the assurance function itself is automated, audits included. This is the time for experts in quality to share and develop the new focus and concepts to build the business with the support of tools.

The evolution of cloud, data speed, tools, collaborative methods, and lean applications has changed the delivery models, and it has indeed made many of our previous challenges void. So our focus is to get aligned with newer delivery models and technologies to meet client expectations with the digitized delivery platforms. Let's start a new journey of collaborative progress with everyone in the quality analysis and compliance field, to refocus our energy on building new concepts, sharing, and defining the best for the future.
This platform is for everyone to express their views and thoughts of the future openly, without being bound by legacy. Come, let's join our thoughts, let's refocus and redefine concepts for Digital Quality! Connect and share your views and articles at ‘Contact@digitq.in’.
Content Disclaimer
The content expressed in this magazine is the thought process of individuals, and we are not responsible for validating or filtering the contents. We do not ask for references, as new ideas and innovative thoughts need not evolve from the past. We encourage people to share information that can create a spark in the reader's mind and be applied in their own context. We leave the judgement over the content to the reader. If any plagiarism is present, the author is responsible, and we do not own the content. Similarly, the concepts presented in this magazine are assumed to be open for the public to analyze and apply in their own cases. We encourage this magazine as an open platform to share new thoughts, concepts, results, and good practices that can lead to digitizing quality to fit the new digital era.

Articles are Welcome
We welcome articles on new ideas, concepts, pilots, analyses, and research on quality that shape the focus for today's digital IT world. You can share your thoughts with us as a complete article and use this platform to communicate with thousands of IT professionals and build a stronger new quality community. To digitize the new world of quality, you can send your article to us at ‘Contact@digitq.in’.

Points to care: use smaller paragraphs, use your own images or copyright-free images, and limit articles to 3 or 4 pages. Avoid marketing and objectionable language. By sending articles to us, you agree to their publication in public media for free usage.

Edited by Vishnu Varthanan Moorthy