A Peer-Evaluation Model for Team Member Selection in a Distributed Development Domain (PETMS)

International Journal of Computer Science and Information Security (IJCSIS), Vol. 18, No. 5, May 2020, ISSN 1947-5500
Abdulrahman M. Qahtani
Computer Science Dept.
Taif University
Taif, Saudi Arabia
amqahtani@tu.edu.sa
ABSTRACT
Selecting the team members is one of the greatest challenges of a
software development project. This challenge is heightened by the
sector’s shift to global software development and distributed
projects, due to the multiple teams and members involved. This
study’s proposed model enhances the peer-evaluation approach, a
common technique in the domain of human resources, to support
software development project managers in selecting and allocating
members to the optimum team for maximum harmony. The
proposed model, known as PETMS, facilitates the assessment of
potential team members across multiple projects by using peer
evaluations of each member together with an evaluation by the
project manager. The study discusses the impact of peer evaluation
on the assessment of both individual team members and the whole
team. The results provide clear indications about team members
that can help a project manager to select and allocate them to the
right place. In addition, the study reveals that there is no clear
correlation between individual- and whole-team assessment. This
can impact on a project significantly.
Keywords
Team; software development; peer evaluation; distributed
development; global software development; case study
1. INTRODUCTION
In recent decades, the software industry has made a significant shift
to a more intelligent, global and agile means of creating software
through globally distributed teams [1]. This change has prompted
the industry to focus on several aspects of generating high-quality and
cost-effective software, such as building teams that are highly skilled
in communication and capable of top performance. At a specific turning point in the
software industry, namely publication of the Agile Software
Development Manifesto, people took on a crucial role in terms of
this manifesto’s focal values: “individuals and interactions over
processes and tools” [2]. Moreover, finding highly skilled human
resources is strong motivation for any organization to globalize its
development [3][4]. There are many reasons for the critical
importance of hiring experienced and highly skilled project team
members, for instance the challenge of communicating across
global boundaries and building trust in distributed teams, and these
impact on the quality of the software produced as well as the
development team’s overall performance [5].
Creating a team for a software development project starts by
choosing its members very carefully [6][7]. This step demands a
significant proportion of a software project manager’s attention [8].
It depends on the type of software developed and how the team is
to be structured. While choosing the best development team members is a
challenge in any project [6], it is more complicated when the project
is extremely complex, large or geographically distributed [6]. The
challenges experienced by geographically distributed teams include
communication and coordination [9]. To overcome them, all team
members should be selected on the basis of specific criteria, such
as experience and specialization in the required domain [6].
Much research has been conducted on the topic of selecting
development team members, from various perspectives. For
instance, some articles have considered team members from the
point of view of the project leader and his or her selection [8]. Other
research has looked at specific criteria to select and assign team
members. Yet other studies have indicated that stakeholders play a
central part in allocating team members to a project, as this affects
performance across the whole project, as well as aspects such as cost,
time and quality [10], in bespoke and agile
projects [11][12]. However, in assessing team performance,
personality has been considered more often than the technological
aspects [13].
To allocate the right people to a development project, there is a need
first to evaluate team members against criteria such as productivity,
performance and personality [13]. Productivity in software
development projects is defined as the average measure of the
efficiency of production [14]. In this study, it indicates the number
of requirements completed over the time scale of the project. In this
study, this factor is measured subjectively by peers and project
managers. Performance encompasses the quality of the software
developed and ability to deal with clients' requirements
competently, using specialised tools. Team members come to know
their colleagues on the project and gain a clear idea of both their
skills and their ability to complete tasks, in the same way as they
learn about their productivity; thus, they have the opportunity to
evaluate each other in terms of their performance. The third criterion
used in this study as a factor for evaluating team members is
personality. In a distributed development project, one of the critical
challenges is communication between teams and team members
[15]. In this study, personality signifies behaviours, communication
skills and involvement in the organisation, all of which are
necessary if team members are to conduct a project successfully
and achieve its goals. Therefore, this study has tried to find a way
to support project managers in selecting a harmonious development
team, using the PETMS model to provide a clear assessment by
ranking people on their previous projects through peer-evaluation
techniques. Furthermore, by conducting a case study
using the proposed model on real projects, this research discusses
how this evaluation provides a clear ranking at both the team and
individual level.
The rest of this article is structured as follows: Section 2 reviews
studies on both the selection and allocation of team members in
various software development domains and the peer evaluation
approach in the domain of human resource management. Section 3,
as well as giving the scope of this article, briefly provides the
background to the evaluation of team members for a distributed
project and explains how selection has been conducted previously,
then presents its research questions. Section 4 presents and explains
the proposed model. Section 5 outlines the methodology. In section
6, the experimental results are presented, explained and discussed.
The last section gives the conclusion of this research and the
directions for future work.
2. RELATED WORK
Recently, we have witnessed definite leaps forward in the software
industry in several areas, such as the move from collocated to
distributed software development using an agile approach. This
significant change has been strong motivation to secure competitive
advantage in terms of skilled human resources, besides improved
quality and reduced cost [12]. Although many companies take this
step to secure the known advantages of the distributed
development, doing so involves challenges [4]. One of the critical
issues is the recruitment of experienced and highly skilled people
as project team members able to fulfil the specific requirements
[12][16]. According to [17], although human resources play a vital
role in any project, choosing appropriate staff is absolutely critical
to software development projects. Thus, the authors focused on
investigating how team members may be selected and allocated at
various stages of the development process. Some studies have
conducted evaluations of current approaches to allocating and
assigning resources to development projects, such as [10] [8] and
[13]. In [10], the proposed methodology considers the complexity
of and differentiation between software projects and the real
capabilities of team members, relative to the specific skills required
at each stage of a project. The study described in [18] indicates that,
according to the literature, the allocation of human resources is the
most critical activity in any software development project. It
identifies several constraints in the allocation of staff. Furthermore,
using the COCOMOII approach, it provides guidelines on
estimating a developer’s productivity. Most research on human
resources for software development projects focuses on allocating
team members to the appropriate project; consequently, few studies
propose an approach to assessing team members that would
facilitate their selection by the project manager.
Research in the field of human resources has shown that peer
evaluation is a vital way to assess team member performance [19].
In [20], the authors find that, “In theory, peer evaluation appears to
be an effective method of collaborative assessment”. They
conducted a review of all aspects of peer evaluation and found that
much research has focused on the success of peer evaluation among
students in the learning sector. In another significant study [21], the
impact of peer evaluation on group members’ performance and
behaviour was studied by comparing peer evaluation to other
assessment approaches, for instance self-assessment and group
assessment. The study recommends peer evaluation and feedback
for small-group assessment on the basis of its significant results
[21]. In view of the lack of research on evaluating team members
in distributed development projects and successful studies of peer
evaluation in small teams, this study proposes a model taking a
peer-evaluation approach.
2.1 Teamwork in distributed software
development
Distributed software development has received significant
attention since the 1990s [12]. The software process is defined as the
activities, methods and associated practices and technologies used by
development teams and individuals [22] in the context of organized
project work.
forms of teamwork in software development projects: collocated,
where the team is allocated to a single site [23]; and distributed,
where the team members are geographically distant, so that the
actors in the software development process are physically separated
from each other [24]. Nowadays, the software industry tends toward the
latter, distributing the development process between decentralized
teams around the world in a phenomenon known as global software
development (GSD) [25].
This distributed development increases the challenge in creating
harmonious teams of highly skilled members. According to [26],
the shift from collocated to distributed projects involves challenges
as well as benefits due to cultural differences and organizational
boundaries in aspects such as the selection of team members. In
addition, a software development project may be of one of several
styles and types of design, such as many teams in a single city or,
alternatively, a single team spread across several countries [27].
This study focuses on the selection of team members in the distributed
domain because, without a clear approach or model, it is extremely
difficult to understand members’ performance, productivity
and personality. These challenges are addressed by the model
proposed in this article.
3. BACKGROUND AND PROBLEM AREA
This section gives a brief review of the problem of team selection and
its importance to the development process in the distributed
development domain, then presents the
study’s objectives and research questions.
3.1 Research Objectives and Research
Questions
This research discusses the selection of team members, as this issue
is highly significant to software development projects. The research
target is a software development team functioning across several
domains, for example in a distributed or collocated project. The
main problem in this scenario is to find the right people to work
together as a harmonious team.
The objectives of this research are to:
• Find a way to support a project manager to select a harmonious
development team.
• Discuss how using peer evaluation can make the allocation of
team members easier.
• Discuss the impact of a low evaluation of a team member on
the evaluation of the whole team.
To achieve the above objectives, the following research questions
are posed:
RQ1. How can we select team members to form a harmonious
team?
RQ2. Does allocating team members who are evaluated differently
impact on the evaluation of the whole team?
4. PEER-EVALUATION TEAM MEMBER SELECTION MODEL (PETMS)
Selecting software development team members is a crucial
challenge in most software development projects. The challenge
increases in large projects or distributed projects, where the
question is how to build a harmonious team that can work to
overcome all obstacles and ensure that the output satisfies both the
project manager and stakeholders. Furthermore, the key to gaining the
clear and profound knowledge of team members that helps project
managers to form their team is the approach known as
"peer-evaluation", whereby team members assess their peers against
various criteria.
various criteria.
The PETMS model in Figure 1 is designed to meet this study's
objectives and to answer the research question about finding a
simple way to select team members. It uses data from previous
projects’ peer evaluations of individuals by their team members.
The model starts by defining the requirements of a new project in
terms of its team. Defining the main requirements and
characterizing the project help to establish the skills and
professions needed by the team’s members. At this stage, the
required skills are usually listed as check boxes for managers to
tick. After choosing the skills necessary to the project, the project
manager identifies individual members from a dashboard of a
central data repository of staff, viewing the data to support their
selection, using parameters such as project and requirement type to
gauge their work against the criteria. The dashboard can show the
data from the repository in several forms; for example, it can order
members on the basis of their results or of their most similar project.
Figure 1. Proposed model for team member selection using the
peer-evaluation technique
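The dashboard’s ordering step can be sketched as follows; the repository schema, field names, member names and scores below are illustrative assumptions, since the paper does not specify a concrete data format.

```python
# Hypothetical central-repository records: each entry is one past
# evaluation of a member on a project. Fields and values are
# illustrative assumptions, not part of the original model spec.
repository = [
    {"member": "A", "project": "P1", "productivity": 7, "performance": 8, "personality": 6},
    {"member": "A", "project": "P2", "productivity": 8, "performance": 7, "personality": 7},
    {"member": "B", "project": "P1", "productivity": 5, "performance": 6, "personality": 9},
]

def rank_candidates(records, criterion):
    """Order members by their average past score on one criterion,
    as the dashboard might when the manager sorts by results."""
    scores = {}
    for rec in records:
        scores.setdefault(rec["member"], []).append(rec[criterion])
    averages = ((m, sum(s) / len(s)) for m, s in scores.items())
    return sorted(averages, key=lambda pair: pair[1], reverse=True)

print(rank_candidates(repository, "productivity"))
# [('A', 7.5), ('B', 5.0)]
```

Sorting by a different criterion, or filtering `records` by project type first, corresponds to the dashboard’s other views.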
The next stage, which completes the cycle, is the key one: evaluating
the selected team members against criteria vital to all development
projects. In the PETMS model, the evaluation is conducted at two
levels. The first is conducted by peers using three criteria:
productivity; personality; and performance (as defined earlier). The
novelty of this model lies in its enhancement of the peer-evaluation
concept originating in the field of human resources, whereby other
members of the project team undertake peer evaluation of each
individual, responding to a prepared form (Table 1) of questions on
a Likert scale from 1 to 10 in order to evaluate and rate all team
members. This evaluation is passed to the following stage, known
in the PETMS model as team member rating (Figure 1).
The second level of evaluation is undertaken by the project
manager, as this role is responsible for the project’s achievement
and success. The manager evaluates each member on the project
using the same form that was used at the peer-evaluation level,
applying a weight to their evaluation. Once they have finished their
evaluation, it passes to the next stage, which is team member rating.
At this stage, the project manager’s evaluation is combined with the
average peer-evaluation score of each member to construct a rating
metric from 1 to 10, termed the mean of the evaluation process.
Table 1: Evaluation form for team members (select 1 to 10, where 1 is the lowest and 10 the highest mark)

Project …       Member 1   Member 2   Member …
Productivity    1-10       1-10       1-10
Performance     1-10       1-10       1-10
Personality     1-10       1-10       1-10
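The two-level rating calculation described above can be sketched as follows; the paper does not fix the value of the manager’s weight, so the 0.5 default here is an assumption.

```python
def member_rating(peer_scores, manager_score, manager_weight=0.5):
    """Combine the mean peer evaluation (1-10) with the project
    manager's weighted evaluation into a single 1-10 rating.
    The 0.5 default weight is an illustrative assumption; PETMS
    leaves the weight to the organisation."""
    if not 0.0 <= manager_weight <= 1.0:
        raise ValueError("manager_weight must be between 0 and 1")
    peer_mean = sum(peer_scores) / len(peer_scores)
    return (1 - manager_weight) * peer_mean + manager_weight * manager_score

# Four peers score a member 8, 6, 7, 9 on one criterion; the manager scores 8.
print(member_rating([8, 6, 7, 9], 8))  # 7.75
```

The same calculation would be applied per criterion (productivity, performance, personality) using the form in Table 1.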
The next step in the proposed model (PETMS) is to save the rating
of each member on the project to a central repository, as in Figure
1, then to use those data as a guide to the selection of team members
in any subsequent development project.
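A minimal sketch of this save-and-reuse step, assuming a JSON file stands in for the central repository (the paper does not specify a storage technology, so the file format and field names are assumptions):

```python
import json
from pathlib import Path

def save_rating(repo_path, project, member, rating):
    """Append one member's project rating to a JSON file standing in
    for the central repository, so that later projects can consult it
    as in Figure 1."""
    path = Path(repo_path)
    records = json.loads(path.read_text()) if path.exists() else []
    records.append({"project": project, "member": member, "rating": rating})
    path.write_text(json.dumps(records, indent=2))

save_rating("ratings.json", "Project 1", "Member 1", 7.75)
```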
The model merges peer evaluation with the project manager’s
evaluation, as happens traditionally in any development project. In
combination, the two construct a rating system that is informed by
proper knowledge and evaluation to form a harmonious team for a
variety of environments and circumstances. Moreover, such a team
can overcome the obstacles and challenges facing a project that is
conducted as a distributed development.
5. RESEARCH METHODOLOGY
To achieve the research objectives and answer the research
questions, this study started by reviewing previous research on
evaluating and allocating human resources in a software
development project. The literature review focused on distributed
projects that use a lightweight approach to managing development,
with just a few members in each project, as in agile methods.
make the selection of team members easier by enhancing peer
evaluation during the development to provide a precise assessment
of the team members themselves for use post-project.
A case study was conducted to test and evaluate the proposed
PETMS model. By using data collected from the selected case
study, the investigation focused tightly on a real industrial
environment. The results reflect practice and indicate that using
peer evaluation in a development project improves the team selection
process and guides project managers
in using previous projects’ feedback and ratings of team members.
5.1 Case study
Much research in software engineering and software development
employs case study as a suitable methodology to investigate
phenomena in their natural context [26], so in this study real data
from development projects are also used to evaluate the proposed
model. In [27], the authors define the various types of case study in
software engineering research. They point to exploratory study as
the primary approach to understanding actions and practices to
establish what is happening in the reality of the project, generating
ideas and hypotheses for new research or resolving existing
problems.
Conducting a case study is one approach to building or confirming
a theory [28]. In exploring a theory or examining the reality of a
process, a survey is an appropriate means of handling the volume of data
collected from specific people, organizations or contexts [28].
Besides, [26] indicates that valuable understanding can be derived
from a case study to extend our knowledge and experience of a
specific subject. In the area of software development and projects,
case-study methodology is suitable to study phenomena in their
natural context, for instance the selection of team members and
evaluation against defined criteria [27]. Thus, this study performed
an actual case study in a distributed development project.
The study started by selecting a company with broad and significant
experience in developing customisation products for distributed
customers at their sites (in accordance with its wishes, the company
will remain anonymous). Although it has multiple teams across
more than five countries in the Middle East, the study was
conducted in Saudi Arabia. The company conducts more than 18
distributed projects simultaneously, covering the sectors of
healthcare, academia and business intelligence. It also customises
products to clients’ needs by developing software solutions and
products. Moreover, in terms of its development methodology and
testing procedures, the company has established protocols to
manage customers’ requests through local customisation and
development processes. This helped the study both to understand the
nature of the company’s work and to conduct the experiment within
the company’s agile development context.
To build the conceptual design of the model using real data, the first
step was to conduct a contextual study of a project at this company
to understand the nature of the development process and business
model used.
5.2 Case-study context and design
The case study’s second step was to select three projects from the
company’s current 18 to conduct an evaluation of the proposed
PETMS model. The selected projects needed to comprise the same
team members in order to measure the impact on the same people
at different times and on different projects. Each chosen project had
five members, working in development roles such as analysis,
design, coding and testing. All the team members were evaluated
by the others in the team responding on a Likert scale from 1 to 10
to questions against three criteria: productivity; performance; and
personality.
The group members were each asked to fill out the form in Table
1, which is an evaluation against the three defined criteria on a scale
from 1 to 10. After collecting the evaluations back from the peers
and also the project manager, the data were statistically tested to
obtain their meaning and meet the study’s objectives.
5.3 Data collection and analysis procedures
This case study used data collected from three selected projects
conducted by five team members over nearly three weeks of
development.
Data analysis was conducted in two steps. The first used the data
stored in a central repository to establish individual evaluations,
calculating the average evaluation of each member on each project.
The second step was at project manager level. It used the Kruskal-
Wallis Test [29] on the data of each project to establish the
differences between the project teams against the three defined
criteria. This statistical test was selected because the collected
data are not normally distributed; moreover, the sample was too small
for a parametric statistical test. The Kruskal-Wallis Test data are in rank
form, as shown in the formula in Figure 2.
H = (N - 1) \frac{\sum_{i=1}^{g} n_i (\bar{r}_i - \bar{r})^2}{\sum_{i=1}^{g} \sum_{j=1}^{n_i} (r_{ij} - \bar{r})^2}, where:

n_i is the number of observations in group i
r_{ij} is the rank (among all observations) of observation j from group i
N is the total number of observations across all groups
\bar{r}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} r_{ij} is the average rank of all observations in group i
\bar{r} = \frac{1}{2}(N + 1) is the average of all the r_{ij}

Figure 2. Kruskal-Wallis Test formula
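The H statistic can be computed directly from raw evaluation scores. The sketch below implements the rank form shown in Figure 2, using average ranks for ties; the example scores are hypothetical, not the study’s data.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic in the rank form of Figure 2:
    (N - 1) times the between-group sum of squares of mean ranks,
    divided by the total sum of squares of all ranks."""
    all_obs = sorted(x for g in groups for x in g)
    N = len(all_obs)

    def rank(x):
        # Average position among all observations (handles ties).
        positions = [i + 1 for i, v in enumerate(all_obs) if v == x]
        return sum(positions) / len(positions)

    r_bar = (N + 1) / 2  # mean of all ranks
    between = total = 0.0
    for g in groups:
        ranks = [rank(x) for x in g]
        r_bar_i = sum(ranks) / len(ranks)
        between += len(g) * (r_bar_i - r_bar) ** 2
        total += sum((r - r_bar) ** 2 for r in ranks)
    return (N - 1) * between / total

# Hypothetical productivity scores for three five-member teams:
h = kruskal_wallis_h([[7, 8, 6, 9, 7], [5, 6, 5, 7, 6], [8, 9, 7, 8, 9]])
```

The resulting H is compared against a chi-squared distribution with g − 1 degrees of freedom to obtain the p-values reported in Table 2.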
6. RESULTS AND DISCUSSION
In this study, the evaluation process was designed to show the
output of the proposed model and the results of individual
evaluations of team members using a peer-review evaluation
approach. At this point, the individual peer-evaluation results of a
team member involved in all three projects are as shown in Figure
3.
Figure 3. Evaluation results of a team member involved in all
three projects
The line chart in Figure 3 shows clear differences in the evaluations
of this team member across the projects. Regarding productivity,
the average of the team’s evaluations of this individual fluctuated
from 2.75 out of 5 in Project 1 to 3.5 in Project 2, then came down
sharply to 1.5 in Project 3. On the other hand, the average for the
individual’s performance fluctuated from 2.75, up to 3.5, then fell
to 1.5. The personality criterion likewise fluctuated in terms of
average team evaluation: in Project 2, the highest evaluation of all
the projects was achieved for this individual, namely 4.25, yet it
was only 2.75 for Project 3 and 3.25 for Project 1. However, this
variation in the evaluation scores had no significant impact on the
whole-team evaluation, as shown in Table 2, which displays the
inferential results of the Kruskal-Wallis Test. This indicates that
there is no statistically significant difference between the teams for
each project in terms of overall productivity, performance and
personality, despite variations in the evaluations of the individual
team members.
The results presented above for the average peer evaluations across
the three projects highlight the case study’s significant
findings, meeting the study’s objectives and answering its research
questions, as defined earlier (section 3). These findings confirm the
impact of using a peer evaluation approach to assess the team
members on a development project and make the selection process
more manageable and transparent. As the results show, the crucial
factors in team member allocation and selection come from the other
team members’ point of view, and these support project managers
to form a harmonious team within a short period of time. The approach
also helps to avoid conflict and miscommunication between
team members caused by personality clashes that may arise should
some members be seen to display low performance or productivity,
as this may affect the team’s cohesion in working together.
Table 2. Results of the statistical test comparing the three teams’ evaluations (5 members in each project team)

Criterion      Project     Mean Rank   Chi-Square   df   P value
Productivity   Project 1   10.00       1.618        2    0.445
               Project 2   7.40
               Project 3   6.60
Performance    Project 1   9.20        0.740        2    0.691
               Project 2   6.80
               Project 3   8.00
Personality    Project 1   8.60        0.966        2    0.608
               Project 2   9.00
               Project 3   6.40
Notably, the results reveal that differences between individual team
members against the various criteria are not significant at team
level across the three selected projects. This indicates, as shown in
Table 2, that in terms of productivity and performance any
mis-selection of a team member has only a low impact on the overall
project.
7. CONCLUSION AND FUTURE WORK
This article presented a study on the selection of team members by
proposing a new model for their evaluation over multiple projects
by peer evaluation against three criteria: productivity; work
performance; and personality. This study discusses ways of
presenting a team member’s average evaluation for each of their
projects to make selection easier, and the impact of each team
member’s scores on the total evaluation for each project. It shows
that there is no statistically significant difference in a project’s
evaluation when team members receive differing individual evaluations.
The data used in this study were real data collected from a case
study involving three distributed projects at a single company.
Future work can improve the model to provide a ranked evaluation
and suggestions for team member selection, using machine-
learning algorithms to make the selection more intelligent and
supportive.
8. REFERENCES
[1] B. Ramesh, L. Cao, and R. Baskerville, “Agile requirements
engineering practices and challenges: an empirical study,” Inf.
Syst. J., 2010.
[2] M. Daneva et al., “Agile requirements prioritization in large-scale
outsourced system projects: An empirical study,” J. Syst. Softw.,
vol. 86, no. 5, pp. 1333–1353, 2013.
[3] S. Sarker, M. Ahuja, and S. Sarker, “Work-life conflict of globally
distributed software development personnel: An empirical
investigation using Border Theory,” Inf. Syst. Res., 2018.
[4] D. E. Damian and D. Zowghi, “RE challenges in multi-site
software development organisations,” Requir. Eng., vol. 8, no. 3,
pp. 149–160, 2003.
[5] B. Boehm and R. Turner, “Management challenges to
implementing agile processes in traditional development
organizations,” IEEE Softw., 2005.
[6] J. A. Espinosa, S. A. Slaughter, R. E. Kraut, and J. D. Herbsleb,
“Familiarity, complexity, and team performance in
geographically distributed software development,” Organ. Sci.,
vol. 18, no. 4, pp. 613–630, 2007.
[7] S. Faraj and L. Sproull, “Coordinating expertise in software
development teams,” Manage. Sci., 2000.
[8] M. André, M. G. Baldoquín, and S. T. Acuña, “Formal model for
assigning human resources to teams in software projects,” Inf.
Softw. Technol., vol. 53, no. 3, pp. 259–275, 2011.
[9] A. M. Qahtan, G. B. Wills, and A. M. Gravell, “Customising
software products in distributed software development,” Int.
Conf. Inf. Soc., no. March, pp. 92–98, 2013.
[10] L. C. e Silva and A. P. C. S. Costa, “Decision model for allocating
human resources in information system projects,” Int. J. Proj.
Manag., vol. 31, no. 1, pp. 100–108, 2013.
[11] R. A. B. Ouriques, K. Wnuk, T. Gorschek, and R. B. Svensson,
“Knowledge management strategies and processes in agile
software development: A systematic literature review,” Int. J.
Softw. Eng. Knowl. Eng., 2019.
[12] A. M. Qahtani, G. B. Wills, and A. M. Gravell, “Customising
software products in distributed software development A model
for allocating customisation requirements across organisational
boundaries,” in International Conference on Information Society,
i-Society 2013, 2013, pp. 92–98.
[13] N. Gorla and Y. W. Lam, “Who should work with whom?,”
Commun. ACM, vol. 47, no. 6, pp. 79–82, 2004.
[14] S. Delaney and D. Schmidt, “A productivity framework for
software development literature review,” in ACM International
Conference Proceeding Series, 2019.
[15] A. M. Qahtani, G. B. Wills, and A. M. Gravell, “A framework of
challenges and key factors for applying agile methods for the
development and customisation of software products in
distributed projects,” Int. J. Digit. Soc., vol. 4, no. 3, pp. 806–813,
2016.
[16] R. Celkevicius and R. F. S. M. Russo, “An integrated model for
allocation and leveling of human resources in IT projects,” Int. J.
Manag. Proj. Bus., vol. 11, no. 2, pp. 234–256, 2018.
[17] S. Mostinckx, T. Van Cutsem, S. Timbermont, E. G. Boix, É.
Tanter, and W. De Meuter, “Mirror-based reflection in
ambienttalk,” Softw. - Pract. Exp., 2009.
[18] K. Siau, X. Tan, and H. Sheng, “Important characteristics of
software development team members: An empirical investigation
using Repertory Grid,” Inf. Syst. J., 2010.
[19] D. F. Baker, “Peer assessment in small groups: A comparison of
methods,” J. Manag. Educ., 2008.
[20] M. Q. Patton, “Evaluation, knowledge management, best
practices, and high quality lessons learned,” Am. J. Eval., 2001.
[21] M. W. Ohland, R. A. Layton, M. L. Loughry, and A. G. Yuhasz,
“Effects of behavioral anchors on peer evaluation reliability,” J.
Eng. Educ., 2005.
[22] R. S. Pressman, Software Engineering A Practitioner’s Approach
7th Edition. 2010.
[23] R. Prikladnicki, J. Audy, and R. Evaristo, “Distributed software
development: Toward an understanding of the relationship
between project team, users and customers,” in ICEIS 2003 -
Proceedings of the 5th International Conference on Enterprise
Information Systems, 2003.
[24] D. Damian and D. Moitra, “Guest editors’ introduction: Global
software development: How far have we come?,” IEEE Softw.,
2006.
[25] A. Aslam et al., “Decision support system for risk assessment and
management strategies in distributed software development,”
IEEE Access, 2017.
[26] B. Sengupta, S. Chandra, and V. Sinha, “A research agenda for
distributed software development,” in Proceedings -
International Conference on Software Engineering, 2006.
[27] R. Prikladnicki, J. L. N. Audy, D. Damian, and T. C. De Oliveira,
“Distributed software development: Practices and challenges in
different business strategies of offshoring and onshoring,” in
Proceedings - International Conference on Global Software
Engineering, ICGSE 2007, 2007.