DRAFT 
Proceedings of the 2011 ASME International Design Engineering Technical Conferences and 
Computers and Information in Engineering Conference 
2011 IDETC/CIE 
August 28 - 31, 2011, Washington, DC USA 
DETC2011-48441 
TOWARDS A TOOL FOR CHARACTERIZING THE PROGRESSION OF ACADEMIC RESEARCH 
Ming Leong 
Department of Mechanical Engineering 
Massachusetts Institute of Technology 
Cambridge, MA USA 

Abdelaziz Bazoune 
Department of Mechanical Engineering 
King Fahd University of Petroleum and Minerals 
Dhahran, Saudi Arabia 

Victor Tang 
Department of Mechanical Engineering 
Massachusetts Institute of Technology 
Cambridge, MA USA 

David R. Wallace 
Department of Mechanical Engineering 
Massachusetts Institute of Technology 
Cambridge, MA USA 

Warren Seering 
Department of Mechanical Engineering 
Massachusetts Institute of Technology 
Cambridge, MA USA 
ABSTRACT 
The importance of process in successful and effective 
technology and product development is widely recognized in 
industry. Tools, such as technology readiness levels (TRLs) and 
various metrics, have been developed and successfully 
employed to guide and strategically plan R&D processes, 
allocate resources, and calibrate expectations. One might hypothesize that academic research would also benefit from similar tools, assisting both researchers and funding organizations. A research assessment tool should: 1) 
facilitate planning and communication; 2) effectively gauge 
progress; and 3) accommodate and capture the diverse scope of 
academic research. However, the inherent open-endedness and exploratory nature of research make it difficult to concretely characterize research progress. 
This paper begins to develop an academic research 
measurement tool. The proposed Research Maturity Levels 
(RMLs) tool divides research activities into four main 
components: 1) background knowledge, 2) problem and 
question formulation, 3) procedures and results, and 4) 
resources. Within each component, the RML steps researchers 
through a process of increasing maturity levels. Additionally, 
each component includes mechanisms to formalize iterations 
and “eureka” moments—when directions and plans may change 
based upon new knowledge. Preliminary evaluation suggests that the approach has promise as a comprehensive measurement tool. 
It is hoped this work will result in a tool that can facilitate 
planning, help to measure and communicate research progress, 
and encompass the diverse scope of academic research goals. 
INTRODUCTION 
The importance of process in technology and product 
development has been widely recognized in industry, academia, 
and education. In product development, a systematic process 
guides the abilities of designers, allows for objective evaluation 
of results, helps to manage complex projects, facilitates 
teamwork, and provides a foundation for educating students [1, 
2]. To track and measure the progression of new products and 
technologies, industry professionals have developed tools like 
Technology Readiness Levels (TRLs). These widely adopted 
tools help to guide and strategically plan R&D processes, 
allocate resources, and calibrate expectations between industry-based 
researchers and managers [3-6]. However, these needs 
are not unique to industry—the objectives of strategic planning, 
resource allocation, and communicating expectations are just as 
relevant in academic environments [7]. While many books have
been written to guide budding researchers through the research 
process [8-11], these guides fall short of delivering the planning and communication benefits that TRLs provide to product development initiatives. Moreover, efforts by the authors to apply TRLs to academic research showed that they are not well suited to that setting. TRLs succeed when 
there is a tangible, well-defined product or technology 
objective. The goals in research are quite different and are often 
open-ended. 
As such, this paper begins to develop a new tool to 
systematically characterize academic research and research 
progress, with the goal of providing feedback to researchers 
and funding organizations. Like a TRL assessment, the 
framework guides researchers through a process of increasing 
maturity, but the structure enables researchers to use their own 
research methodologies and practices to achieve a diverse set of 
self-defined goals. 
The work draws inspiration from Technology Readiness Levels, which are proven to aid strategic planning, communication, and gauging progress in product development but do not suit the nature of academic research. The authors 
also draw upon research productivity metrics that have been 
applied in academic environments, but are not well suited for 
strategic planning and facilitating progress as they do not 
provide or suggest pathways for research progression. With the 
researcher as the primary user, our hope is that this work will 
lay a foundation for a comprehensive tool that can accomplish 
all these goals — facilitating planning and communication, 
accommodating and capturing the diverse scope of academic 
research, and effectively gauging progress. 
DISCUSSION OF PRIOR WORK 
Technology Readiness Levels 
Technology Readiness Levels (TRLs) are a numerical scale 
of technology maturity developed by NASA in the 1980s. They formalize nine (1-9) steps of technology maturation, 
defining a map ending with readiness for deployment in 
operational conditions [12]. Each successive level requires that 
the technology is functional in an increasingly realistic 
environment [13]. The General Accounting Office has endorsed 
the use of TRLs in military research and the Department of 
Energy and the Department of Defense have also adopted TRLs 
[14]. Although originally written for spacecraft development, 
TRLs have now been appropriately re-worded to be used in 
industries ranging from software to biomedical products [14, 
15]. Other industry organizations similarly tailor TRLs for their 
area of technological expertise [16]. Graettinger [15] and 
Tillack [17] also illustrate how TRLs can be used and 
implemented in technologies that contain multiple systems that 
interface with one another. 
TRLs are also commonly used as a method of communicating the state of a technology as it is transferred from research units to business units for further development [15]. 
The success of Technology Readiness Levels in various 
contexts demonstrates how an appropriate instrument can be 
designed for planning, communication, and gauging progress. 
Because they act as a common language that managers and researchers both understand, TRLs allow the maturity of a technology to be assessed by non-technical decision-makers [12, 18]. They, in turn, can 
effectively allocate resources, develop business propositions, or 
strategically plan product launches [18]. 
However, when applied to academic research projects, 
TRLs fall short in encompassing the broad and diverse scope of 
research. TRLs, with their linear structure and wording geared towards technology development, presume a tangible product objective — a TRL of 9 is defined as “actual 
system proven through successful mission operations” [14]. 
While research does sometimes result in commercial products 
or deployed technology systems, most research is concerned 
with the development of new knowledge [3]. This makes the 
TRL scale inappropriate for research as it is often unclear what 
the final “end product” will be. Additionally, the “end products” 
of research, once established, vary widely from papers to new 
procedures, algorithms, or measurement tools [3], most of 
which correspond to the lowest, or lowest few, levels of TRL 
maturity. TRLs lack sufficient resolution for early stage 
academic research. 
TRLs’ singular focus on assessing progress on the end deliverable is another aspect that limits their usefulness for research. Issues such as leadership and team formation, or building expertise and underlying knowledge, are not 
considered. However, prior art suggests that such factors are 
important to successful research and are forms of progress. 
Acknowledging these limitations, other researchers have proposed alternatives to, and modifications of, TRLs. Some examples include: Tang and Otto’s [19] proposal of a multi-enterprise readiness scale; Valerdi and Kohl’s [20] approach to risk management; Heslop et al.’s [21] cloverleaf model of technology transfer; Fomin et al.’s [22] incorporation of Quality Functional Deployment with TRLs; and Hicks et al.’s [23] modification of TRLs for product development. 
Academic Research Productivity 
Literature on research productivity is focused on 
determining the factors that make a productive researcher and 
research environment. Data are collected through surveys and 
correlated with measures of productivity, such as publications 
and citations [24, 25]. In a literature review, Bland [26] 
identifies 12 characteristics of a research-conducive 
environment. Other studies have linked mentorship [27], collaboration [28], funding [29-32], organizational climate [33], group size [34], and participation in professional organizations [35, 36] with productivity. This body of literature points to the importance of a researcher's environment for conducting research effectively. 
Where TRLs fall short in capturing enablers within an academic research environment, the literature on research productivity provides useful insights. These 
studies begin to form a picture of what a supportive, productive 
research environment entails and might even be useful in 
predicting success [37]. However, they do not assist researchers in conveying the state of their current research project, nor do they help with planning and communicating research progression. 
Industrial Research and Development Evaluation 
Metrics 
Research and development evaluation methods use 
quantitative and semi-quantitative measures to communicate to 
managers the impact and contributions of R&D departments 
[38]. Extensive literature has been written on how to develop 
effective evaluation methods using surveys, bibliometrics, peer 
reviews, etc. [39-43]. Evaluations are conducted for a number 
of reasons, namely to motivate researchers, monitor the progress 
of research, evaluate profitability, aid coordination and 
communication, and stimulate organizational learning [44]. 
Empirical studies have found some success in using research 
evaluations [44, 45], but not without challenges, such as difficulty in choosing appropriate metrics, acceptance of performance measures, and time lags between evaluation and tangible impacts [46, 47]. See work by Geisler [48], Marjanovic [49], Hauser [50], and Kerssens-van Drongelen [46] 
for literature reviews. 
While insightful and useful in industrial contexts, the area 
of research evaluation metrics is not directly applicable to this 
paper. The results and correlations provide a snapshot of a research program's historical performance, rather than a pathway of progression for the research. This work is focused on developing a tool that can guide researchers and support their communication and planning, rather than quantify success. 
This review of the literature highlights some of the 
strengths of industrial methods for measuring progress. 
However, none of the methods discussed fulfill the 
requirements of a comprehensive measurement tool for 
academic research: facilitate planning and communication, 
effectively gauge progress, and encompass the dimensions of a 
research program. Table 1 summarizes the methods’ strengths 
and weaknesses. 
TABLE 1: COMPARISON OF STRENGTHS AND WEAKNESSES OF TRLS AND RESEARCH PRODUCTIVITY AND EVALUATION METRICS 

                                    TRLs     Research Productivity 
                                             and Evaluation Metrics 
Planning/Communication              Strong   Weak 
Gauging progress                    Strong   Weak 
Encompassing dimensions of a 
successful research program         Weak     Strong 
RESEARCH PROGRESSION LEVELS: A FIRST 
ATTEMPT 
We first proposed a research-oriented adaptation of TRLs, named Research Progression Levels (RPLs). Table 2 compares the Technology Readiness Levels and the Research Progression Levels. We asked eight mechanical engineering faculty members to use the TRLs and RPLs to map one of their active research projects. Although the Research Progression Levels were more appropriate than Technology Readiness Levels for academic research, they were still confusing to researchers. It became clear that neither scale was successful in accomplishing the goals of this research. 
TABLE 2: COMPARISON OF TRLS AND RPLS, AN INITIAL, HIGHLY PARALLEL TOOL USED TO PINPOINT DEFICIENCIES IN APPROACH 

     Technology Readiness Levels [14]          Research Progression Levels 
 1   Basic principles observed and reported    Expert knowledge of prior art, its 
                                               meaning and value 
 2   Technology concept and/or application     Key research questions are selected 
     formulated                                and justified 
 3   Analytical and experimental critical      Research strategy and conceptual 
     function and/or characteristic proof      structure of the REP are fully 
     of concept                                formulated 
 4   Component and/or breadboard validation    Initial test of research strategy 
     in a laboratory environment               complete. Results have been 
                                               interpreted and documented 
 5   Component and/or breadboard validation    Research procedures tested under 
     in a relevant environment                 representative conditions 
 6   System/subsystem model or prototype       Complete detailed research 
     demonstration in a relevant environment   procedures are defined 
 7   System prototype demonstration in an      Using the research procedures, 
     operational environment                   results have been systematically 
                                               collected under representative 
                                               conditions 
 8   Actual system completed and qualified     Research procedures have been 
     through test and demonstration            completed under the agreed robust 
                                               regime 
 9   Actual system proven through successful   Results have been interpreted 
     mission operations 
 10                                            REP is complete and delivered 
The academic researchers who were interviewed voiced 
several key concerns: 
• The numerical scaling of the TRLs and RPLs gave a sense 
of a grade — faculty felt hesitant about being “scored”, 
despite efforts to emphasize that the objective of the tool 
was to provide guidance and structure for moving research 
forward. 
• Faculty felt strongly that the process needed to recognize 
that the objective of each project was distinct. 
• When communicating with funding organizations, collaborators, etc., an important first step in a project is to reach an agreement about the goal of the research. This did not appear in the proposed RPLs. 
• Iterative processes and serendipitous changes in direction are vital to research but were not accounted for in the tool. 
• Research contains multiple components, and progress is often made at different rates within each component, unlike the linear structure of the RPL tool. 

FIGURE 1: VISUALIZATION OF RESEARCH PROJECT PROFILE 
In addition to these overall concerns, we observed that the 
Research Progression Levels were confusing and difficult for 
researchers to use, despite multiple revisions based on 
feedback. The interviews revealed that the RPLs had a fundamental flaw: they conflated “things researchers do” (e.g., literature search, question selection) with progress on those activities in a single progression scale. Researchers may tackle different aspects of research in a different order or in parallel, rather than in the fixed linear order the RPLs assume. This led to the idea of identifying key research components that can then be independently assessed for progression. 
RESEARCH MATURITY LEVELS: A SECOND 
ITERATION 
The second iteration grew out of the idea that key research 
components can be assessed independently along unique scales 
of maturity, rather than progress. Progress implies a journey 
towards a known, final end state. A research project’s end state 
is often unknown and ever changing, so instead the maturity, or 
level of refinement and focus, is tracked. 
The academic research measurement tool developed from this insight, named Research Maturity Levels (RMLs), takes the form of a worksheet. From a completed worksheet, the researcher can describe a profile of their project in terms of its maturity on a number of axes. 
Multiple axes of progress 
The one-dimensional, final deliverable-focused TRL and 
initial RPL scales measured a linear progression towards a final 
technology or goal. However, even within product 
development, it is well understood that technological progress 
is not the only area important to a successful technology 
development program [23]. 
Drawing upon the research process literature, we first divided the characterization of research into four separate, distinct axes, or components: background knowledge, problem and question formulation, procedures and results, and resources. This way, maturity or progress on each component can be assessed independently. 
The four research components may progress in parallel or 
sequentially, and may advance at different rates depending on 
preliminary findings or stage of research, etc. Thus, each 
research component is given its own maturity scale appropriate 
for the given component. Graphically, these research 
components could be visualized as progress bars that plot the 
maturity of each component, depicting a profile of the project, 
as shown in Figure 1. 
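To make the worksheet structure concrete, a profile of this kind could be represented as a small data structure with one maturity scale per component and rendered as the progress bars of Figure 1. This is a minimal illustrative sketch, not the authors' implementation; the component names and all non-Q level labels are assumptions (the paper only gives the Q1-Q5 labels).

```python
from dataclasses import dataclass, field

# The four research components, each with its own maturity scale.
# Q1-Q5 follow Figure 2; the other scales' labels are placeholders.
COMPONENT_SCALES = {
    "background_knowledge":   ["B1", "B2", "B3", "B4", "B5"],  # assumed labels
    "problem_and_question":   ["Q1", "Q2", "Q3", "Q4", "Q5"],  # from Figure 2
    "procedures_and_results": ["P1", "P2", "P3", "P4", "P5"],  # assumed labels
    "resources":              ["R1", "R2", "R3", "R4", "R5"],  # assumed labels
}

@dataclass
class ProjectProfile:
    """Maturity profile of one research project: one level per component."""
    levels: dict = field(default_factory=dict)  # component -> levels achieved

    def progress_bars(self) -> str:
        """Render the profile as text progress bars (cf. Figure 1)."""
        rows = []
        for comp, scale in COMPONENT_SCALES.items():
            n = self.levels.get(comp, 0)
            bar = "#" * n + "-" * (len(scale) - n)
            label = scale[n - 1] if n else "--"
            rows.append(f"{comp:24s} [{bar}] {label}")
        return "\n".join(rows)

# Example: a project at Q3 on problem formulation and R1 on resources.
profile = ProjectProfile(levels={"problem_and_question": 3, "resources": 1})
print(profile.progress_bars())
```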
The maturity levels within a component can be more 
clearly illustrated through an example of the problem and 
question formulation component (see Figure 2). The first 
maturity level, Q1, requires only a “broad statement of the 
problem”. From that broad statement, Q2 requires a “list of 
preliminary research questions.” Q3 takes the preliminary 
questions and narrows them down to a more focused “statement 
of research questions.” The research questions are then parsed 
into a “structured statement of problem for investigation” (Q4), 
which finally result in a “testable, concrete problem statement” 
(Q5). 
The other components guide the researcher through a 
similar progression. The background knowledge component 
sets a framework for a well-organized, thorough review of the 
literature. The procedures and results component ensures 
meaningful results by requiring an increasing level of fidelity 
from the experimental procedures. Finally, the resource 
component helps researchers build a productive environment in 
terms of physical, intellectual, human, monetary, and 
management resources. 
Maturity levels defined by milestones 
In the wording of TRLs, each level requires that a 
technology is able to achieve a specific deliverable. For 
example, in order to achieve a TRL of 6, a “system/subsystem model or prototype demonstration in a relevant environment” is required [14]. Similarly, each maturity level of the RMLs comprises a series of characteristic research milestones. 
Using a scale, researchers judge the completeness of each 
milestone for their project. The milestones and their progress 
then act as a guide to determine the project’s current maturity 
level for the given component. For example, Figure 3 shows the 
milestones used for the problem and question formulation 
component. The first maturity level (Q1) states that a “broad 
statement of [the] problem” has been established. In order to 
gauge progress on this milestone, the RML is used to assess 
whether the researchers have a “thorough understanding of 
literature and engineering context”, that they have “identified
5 Copyright © 2011 by ASME 
challenges” and “problems appropriate for research”, and that 
they have articulated the “impact and importance of question in 
[a] broader context.” 
FIGURE 2: EXAMPLE OF MATURITY LEVELS FROM THE PROBLEM AND QUESTION FORMULATION COMPONENT 

Maturity Level 
Q1  Broad statement of problem 
Q2  List of preliminary research questions 
Q3  Statement of research questions 
Q4  Structured statement of problem for investigation 
Q5  Testable, concrete problem statement 
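Because the milestones and their completeness ratings determine the level, the current maturity level can be read off as the highest level whose milestones are all essentially complete. The following is a minimal sketch of that reading, not the authors' procedure; the 0.75 threshold and the rating-to-fraction mapping are assumptions, and only the Q1 milestones from Figure 3 are spelled out.

```python
# Worksheet completeness ratings mapped to fractions (mapping assumed).
COMPLETENESS = {
    "not started": 0.0, "less than half": 0.25, "about half": 0.5,
    "more than half": 0.75, "almost completed/completed": 1.0,
}

# Milestones per maturity level; Q1 uses the milestones from Figure 3,
# and the remaining levels would list theirs analogously.
MILESTONES = {
    "Q1": ["thorough understanding of literature and engineering context",
           "identified challenges",
           "identified problems appropriate for research",
           "impact and importance of question articulated"],
    # "Q2": [...], "Q3": [...], etc.
}

def current_level(ratings: dict, threshold: float = 0.75):
    """Return the highest level whose milestones all meet the threshold."""
    achieved = None
    for level, milestones in MILESTONES.items():  # levels in ascending order
        if all(COMPLETENESS[ratings.get(m, "not started")] >= threshold
               for m in milestones):
            achieved = level
        else:
            break  # stop at the first level that is not yet complete
    return achieved
```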
Iteration accounting 
An important feature of Research Maturity Levels is 
iteration accounting—capturing the notion that trying multiple approaches to exploring a problem, or just doing things over, is an essential part of learning and does represent significant research 
maturation. In traditional TRLs, if a project makes three 
different attempts or exploration approaches for a breadboard 
experiment, the project is evaluated as a TRL 4, whether it is on 
the first or final experiment. The iterations are not documented 
as progress, even though new breakthroughs in understanding 
might have been made. Unlike TRLs, the RMLs attempt to 
document and encompass these revisions so that progress made 
through iteration is recognized. These iterations may occur 
because of new findings, planned progressions, or unforeseen 
complications. At the beginning of each component, there is a 
space to record the current iteration, number of anticipated 
iterations, and the reason for the current iteration. See Figure 3 
for an example of iteration accounting in the context of the problem and question formulation component. 
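The iteration fields at the top of each component (see Figure 3) could be carried in the same structure as a small record per component. Again an illustrative sketch rather than the authors' implementation; the reason values are the ones listed in the worksheet.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class IterationReason(Enum):
    """Explanations for the current iteration, as listed in the worksheet."""
    NO_PREVIOUS_ITERATION = "no previous iteration"
    PLANNED_PROGRESSION = "pre-planned logical progression"
    NEW_INSIGHTS = "new insights obtained in prior iteration"
    PRIOR_APPROACH_FAILED = "approach of prior iteration did not work out as hoped"
    OTHER = "other reasons"

@dataclass
class IterationRecord:
    component: str
    current_iteration: int
    final_anticipated: Optional[int]  # None means "don't know"
    reason: IterationReason

# Example: second pass at problem formulation, driven by new insights.
record = IterationRecord("problem_and_question", 2, None,
                         IterationReason.NEW_INSIGHTS)
```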
Flexible end goals 
Technology Readiness Levels are very specific in their end 
product — an “actual system proven through successful 
mission operations”. However, in research the “end product” 
usually is not as concrete and well defined as a functioning 
technology; rather, the ultimate goal of research is to uncover 
and develop new knowledge. This knowledge can take the form 
of a journal article, patent, new procedure, etc. In order to 
accommodate the diverse range of research endeavors, the 
RMLs progress towards a mastery of the research components, 
culminating in a formal, peer-reviewed dissemination of 
research findings as the final maturity level of the Procedures 
and Results component. 
Development of a productive research environment 
The literature recognizes the importance of a strong 
research environment in order for researchers to be productive. 
To assist researchers in building a productive environment 
around them, the resource component of the RMLs acts as a 
checklist for necessary monetary, intellectual, human, physical, 
and managerial resources. This component is slightly different from the other components: first, it is further decomposed into sub-components by resource type, and second, it is structured less as a progression and more as a checklist. See Figure 4 for an example of the research support environment sub-component. 
Verification of RMLs’ efficacy 
The authors conducted five preliminary studies, guiding mechanical engineering faculty and graduate students through the tool to ensure that the prompts were properly interpreted and the form was understood. 
the structure and format of the Research Maturity Levels were 
successful in creating a profile of a variety of research projects. 
Initial tests also suggest that if this tool is used periodically, 
progression in a research project can be tracked, whether 
progress was made through new iterations or higher maturity 
levels within an iteration. The authors plan to continue testing 
the tool to improve its usability and to understand the 
functional requirements from researchers. 
Our preliminary tests with faculty also uncovered 
interesting observations of how this tool may be used. First, we observed a difference in the way scientists and engineers complete the tool. More science-minded faculty tended to shy 
away from classifying open-ended milestones, such as a 
“thorough understanding of literature and engineering context”, 
as “complete”, noting that there are always new things to 
discover and learn, despite their best efforts to understand the 
space. Faculty with engineering mindsets felt more confident 
classifying a milestone as “complete” when a clear solution had 
emerged from their work. 
Another observation was that faculty did not always seem 
familiar with the idea of using a structured research process. 
For example, some faculty had trouble distinguishing “testing 
of core research procedures in approximate conditions” from 
“applying the full research procedure”. In product design and 
technology development, the practice of using low-fidelity models to verify core concepts is well known and widely used. Bringing 
this concept to research may improve the research process and 
make it more efficient. 
FIGURE 3: FULL EXAMPLE OF PROBLEM AND QUESTION FORMULATION COMPONENT 

Problem and Question Formulation 
This component begins with the statement of a problem of interest. It then progresses through the development of research questions, finally concluding with a testable, well-defined problem statement or hypothesis with clear research expectations and scope. 

Current iteration:           1  2  3  4  5  6  7  8  9  10 
Final anticipated iteration: 1  2  3  4  5  6  7  8  9  10  Don't know 
Explanation for current iteration: No previous iteration / Pre-planned logical progression / New insights obtained in prior iteration / Approach of prior iteration did not work out as hoped / Other reasons 

Please respond to the following prompts considering only the current iteration. Once you have completed this section, put a check beside the maturity level that best describes the current state of research. Each milestone, and an overall rating for each level, is scored on the completeness scale below, with space for notes. 

Q1  Broad statement of problem 
    - Thorough understanding of literature and engineering context 
    - Identified challenges 
    - Identified problems appropriate for research 
    - Impact and importance of question in broader context articulated 

Q2  List of preliminary research questions 
    - Key characteristics of problem are articulated 
    - List of many research questions related to problem 

Q3  Statement of research questions 
    - Focused study related to preliminary questions 
    - Prioritization of promising research questions 
    - Short list of critical research questions formed 
    - Most critical question(s) chosen 

Q4  Structured statement of problem for investigation 
    - Formulate research hypothesis/problem requirements 
    - Hypothesis/problem divided into manageable subproblems 
    - Feasibility of addressing hypothesis/problem determined 

Q5  Testable, concrete problem statement 
    - Variables (dependent, independent, control, intervening) identified 
    - Focused research expectations and scope defined 
    - Operational definitions and assumptions stated 

Completeness scale: Not applicable / Not started / Less than half / About half completed / More than half / Almost completed / Completed 

Problem and Question Formulation maturity level (check one): Q1 Broad statement of problem / Q2 List of preliminary research questions / Q3 Statement of research questions / Q4 Structured statement of problem for investigation / Q5 Testable, concrete problem statement 
FIGURE 4: EXAMPLE OF THE RESEARCH SUPPORT ENVIRONMENT SUB-COMPONENT OF THE RESOURCES COMPONENT 

Research Support Environment               [ ] This resource is not relevant 
This sub-category refers to the nurturing of connections with relevant professionals to build the emotional and intellectual support environment necessary for research. Each item, and an overall rating for each maturity level, is scored on the completeness scale: Not applicable / Not started / Less than half / About half completed / More than half / Almost completed / Completed. 

Advocacy of supervisors and/or administration received 
    - Support hiring students and staff in place 
    - Support streamlining purchasing in place 
    - Support building and maintaining lab in place 
    - Support to obtain money in place 
    - Support to operate projects in flexible, timely manner in place 

Intellectual research environment established 
    - Relevant network of mentors and advisors identified 
    - Appropriate academic collaborators identified and contacted 
    - Relationships with relevant industry professionals established 

Comments: please use the space below to explain the current state of any of the research support environment. 
FUTURE WORK 
This work describes steps toward developing a 
comprehensive tool for characterizing the progression of 
academic research. The proposed Research Maturity Levels 
tool seeks to fulfill the requirements of: facilitating planning 
and communication; accommodating and capturing the diverse 
scope of academic research; and effectively gauging progress. 
Initial tests suggest that the proposed RMLs successfully 
accomplish these goals and that further improvements may 
increase its efficacy. 
Through usability case studies, the structure and wording 
of the RMLs will be refined until the tool can be widely distributed for use by a range of researchers. This final form will be 
deployed as online software. 
While our primary focus has been to develop a 
comprehensive tool for researchers, we have also been 
exploring ways to provide researchers with more feedback than 
just the profile of their research. Inherent to all research is risk. 
Many researchers intuitively conduct risk-value assessments 
while choosing projects or new directions. We hope that from 
the project profile developed by the Research Maturity Levels, 
we will be able to provide researchers with a formal risk profile 
of their project, suggesting ways to manage risks, set priorities and intermediate-term goals, and make decisions. 
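As one purely illustrative example of such feedback (the paper does not define a risk model), a coarse risk flag could be derived from the profile and iteration records, for instance by flagging components that remain at low maturity after repeated unplanned iterations. All thresholds below are assumptions.

```python
def risk_flags(levels: dict, unplanned_iterations: dict,
               scale_length: int = 5) -> list:
    """Illustrative only: flag components with low maturity despite
    repeated unplanned iterations as candidates for re-planning."""
    flags = []
    for comp, level in levels.items():
        unplanned = unplanned_iterations.get(comp, 0)
        if level <= scale_length // 2 and unplanned >= 2:
            flags.append(f"{comp}: still at level {level} after "
                         f"{unplanned} unplanned iterations")
    return flags

# Example: procedures stuck at level 2 after three unplanned iterations.
print(risk_flags({"procedures_and_results": 2},
                 {"procedures_and_results": 3}))
```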
ACKNOWLEDGMENTS 
The authors would like to thank the King Fahd University 
of Petroleum & Minerals (KFUPM) in Dhahran, Saudi Arabia, 
for funding the research reported in this paper through the 
Center for Clean Water and Clean Energy at MIT and KFUPM. 
REFERENCES 
1. Ulrich, K.T. and S.D. Eppinger, Product design and development. 3rd ed. 
2. Pahl, G., K. Wallace, and L.T.M. Blessing, Engineering design: a systematic approach. 3rd ed. 2007, London: Springer. 
3. Garrett-Jones, S. and D. Aylward, Some recent developments in the evaluation of university research outcomes in the United Kingdom. Research Evaluation, 2000. 9(1): p. 69-75. 
4. Kirby, M., D. Mavris, and M. Largent, A process for tracking and assessing emerging technology development programs for resource allocation, in 1st AIAA ATIO Forum. 2001. 
5. Kenley, C. and T. Creque, Predicting technology operational availability using technical maturity assessment, in Systems Engineering. 1999. p. 198-211. 
6. Godener, A. and K. Soderquist, Use and impact of performance measurement results in R&D and NPD: an exploratory study, in R&D Management. 2004. p. 191-219. 
7. Brew, A., The nature of research: inquiry in academic contexts. 2001, London; New York: Routledge/Falmer. 
8. Bock, P., Getting it right: R&D methods for science and engineering. 2001, San Diego; London: Academic Press. 
9. Leedy, P.D. and J.E. Ormrod, Practical research: planning and design. 9th ed. 2010, Upper Saddle River, NJ: Merrill. 
10. Blaxter, L., C. Hughes, and M. Tight, How to research. 3rd ed. 2006, Maidenhead; New York: Open University Press. 
11. Kumar, R., Research methodology: a step-by-step guide for beginners. 2nd ed. 2005, London: SAGE. 
12. Mankins, J.C., Technology readiness levels. White Paper, April 1995. 
13. Mackey, R., R. Some, et al., Readiness levels for spacecraft information technologies, in Aerospace Conference. 2005. 
14. Office of the Director of Defense Research and Engineering, Technology Readiness Assessment (TRA) Deskbook. 2009 (oai.dtic.mil). 
15. Graettinger, C.P., et al., Using the Technology Readiness Levels Scale to Support Technology Management in the DoD's ATD/STO Environments. 2002. 
16. Cornford, S.L. and L. Sarsfield, Quantitative methods for maturing and infusing advanced spacecraft technology, in Aerospace Conference, 2004. Proceedings. 2004 IEEE. 2004. 
17. Tillack, M., et al., An evaluation of fusion energy R&D gaps using technology readiness levels, in Fusion Science and Technology. 2009. p. 949. 
18. Dubos, G.F., J.H. Saleh, and R. Braun, Technology readiness level, schedule risk and slippage in spacecraft design: data analysis and modeling. 2007. 
19. Tang, V. and K.N. Otto, Multifunctional Enterprise Readiness: Beyond the Policy of Build-Test-Fix Cyclic Rework. 2009. 
20. Valerdi, R. and R.J. Kohl, An approach to technology risk management, in Engineering Systems Division Symposium. 2004. 
21. Heslop, L., E. McGregor, and M. Griffith, Development of a technology readiness assessment measure: the Cloverleaf Model of technology transfer, in The Journal of Technology Transfer. 2001. p. 369-384. 
22. Fomin, P., T. Mazzuchi, and S. Sarkani, Incorporating maturity assessment into Quality Functional Deployment for improved decision support analysis, risk management, and defense acquisition, in Proceedings of the World Congress on Engineering and Computer Science. 2009. 
23. Hicks, B., A. Larsson, S. Culley, and T. Larsson, A methodology for evaluating technology readiness during product development, in International Conference on Engineering Design, ICED '09. 2009. Stanford, CA. 
24. Dundar, H. and D. Lewis, Determinants of research productivity in higher education, in Research in Higher Education. 1998. p. 607-631. 
25. Toutkoushian, R.K., et al., Using publication counts to measure an institution's research productivity, in Research in Higher Education. 2003. p. 121-148. 
26. Bland, C. and M. Ruffin, Characteristics of a productive research environment: literature review, in Academic Medicine. 1992. p. 385-397. 
27. Steiner, J.F., et al., Assessing the role of influential mentors in the research development of primary care fellows, in Academic Medicine. 2004. p. 865. 
28. Landry, R., N. Traore, and B. Godin, An econometric analysis of the effect of collaboration on academic research productivity, in Higher Education. 1996. p. 283-301. 
29. Geuna, A. and B.R. Martin, University research evaluation and funding: an international comparison, in Minerva. 2003. p. 277-304. 
30. Lindsey, D., The relationship between performance indicators for academic research and funding: developing a measure of return on investment in science, in Scientometrics. 1991. p. 221-234. 
31. Guellec, D. and B. de la Potterie, From R&D to productivity growth: do the institutional settings and the source of funds of R&D matter?, in Oxford Bulletin of Economics and Statistics. 2004. p. 353-378. 
32. Gulbrandsen, M. and J. Smeby, Industry funding and university professors' research performance, in Research Policy. 2005. p. 932-950. 
33. Ramsden, P., Describing and explaining research productivity, in Higher Education. 1994. p. 207-226. 
34. Louis, K., et al., Becoming a scientist: the effects of work-group size and organizational climate, in Journal of Higher Education. 2007. p. 311+. 
35. Hitchcock, M., et al., Professional networks: the influence of colleagues on the academic success of faculty, in Academic Medicine. 1995. p. 1108-1116. 
36. Ynalvez, M. and W. Shrum, Professional networks, scientific collaboration, and publication productivity in resource-constrained research institutions in a developing country, in Research Policy. 2011. p. 204-216. 
37. Hekelman, F., S. Zyzanski, and S. Flocke, Successful and less-successful research performance of junior faculty, in Research in Higher Education. 1995. p. 235-255. 
38. Werner, B. and W. Souder, Measuring R&D performance: state of the art, in Research-Technology Management. 1997. p. 34-42. 
39. Poh, K., B. Ang, and F. Bai, A comparative analysis of R&D project evaluation methods, in R&D Management. 2001. p. 63-75. 
40. Coccia, M., A basic model for evaluating R&D performance: theory and application in Italy, in R&D Management. 2001. p. 453-464. 
41. Ojanen, V. and O. Vuola, Coping with the multiple dimensions of R&D performance analysis, in International Journal of Technology Management. 2006. p. 279-290. 
42. Butler, L., Using a balanced approach to bibliometrics: quantitative performance measures in the Australian Research Quality Framework, in Ethics in Science and Environmental Politics (ESEP). 2008. p. 83-92. 
43. Garcia-Aracil, A., A. Gracia, and M. Perez-Marin, Analysis of the evaluation process of the research performance: an empirical case, in Scientometrics. 2006. p. 213-230. 
44. Chiesa, V., et al., Designing a performance measurement system for the research activities: a reference framework and an empirical study, in Journal of Engineering and Technology Management. 2008. p. 213-226. 
45. Kim, B. and H. Oh, An effective R&D performance measurement system: survey of Korean R&D researchers, in Omega-International Journal of Management Science. 2002. p. 19-31. 
46. Kerssens-van Drongelen, I. and A. Cooke, Design principles for the development of measurement systems for research and development processes, in R&D Management. 1997. p. 345-357. 
47. Garcia-Valderrama, T. and E. Mulero-Mendigorri, Content validation of a measure of R&D effectiveness, in R&D Management. 2005. p. 311-331. 
48. Geisler, E., The metrics of technology evaluation: where we stand and where we should go from here, in International Journal of Technology Management. 2002. p. 341-374. 
49. Marjanovic, S., S. Hanney, and S. Wooding, A historical reflection on research evaluation studies, their recurrent themes and challenges. Technical Report. 2009. p. 55. 
50. Hauser, J., Research, development, and engineering metrics, in Management Science. 1998. p. 1670-1689. 

More Related Content

Viewers also liked

Karen Resume and Cover
Karen Resume and CoverKaren Resume and Cover
Karen Resume and Cover
Karen Schwaab
 
Modularizing Arcihtectures Using Dendrograms1
Modularizing Arcihtectures Using Dendrograms1Modularizing Arcihtectures Using Dendrograms1
Modularizing Arcihtectures Using Dendrograms1
victor tang
 
Water's Wonders S15
Water's Wonders S15Water's Wonders S15
Water's Wonders S15
Afton Aikens
 
Old-Fashioned Mountain Fun W14
Old-Fashioned Mountain Fun W14Old-Fashioned Mountain Fun W14
Old-Fashioned Mountain Fun W14
Afton Aikens
 
innovation conference [3]
innovation conference [3]innovation conference [3]
innovation conference [3]
victor tang
 
2009 ASME final
2009 ASME final2009 ASME final
2009 ASME final
victor tang
 
Modular Def PSS DETC 20120217
Modular Def PSS DETC 20120217Modular Def PSS DETC 20120217
Modular Def PSS DETC 20120217
victor tang
 
Practica tabla numerica
Practica tabla numericaPractica tabla numerica
Practica tabla numerica
Daniel Martinez Segovia
 
Global outreach, EarthingSport
Global outreach, EarthingSportGlobal outreach, EarthingSport
Global outreach, EarthingSport
Mark Aaron Saus
 
Iconic Adventures S16
Iconic Adventures S16Iconic Adventures S16
Iconic Adventures S16
Afton Aikens
 
Complicatedness_Tang
Complicatedness_TangComplicatedness_Tang
Complicatedness_Tang
victor tang
 
Eric Stewart Resume
Eric Stewart ResumeEric Stewart Resume
Eric Stewart Resume
Eric Stewart
 
EarthingSport, Global sport and Environmental Outreach Brochure
EarthingSport, Global sport and Environmental Outreach BrochureEarthingSport, Global sport and Environmental Outreach Brochure
EarthingSport, Global sport and Environmental Outreach Brochure
Mark Aaron Saus
 

Viewers also liked (13)

Karen Resume and Cover
Karen Resume and CoverKaren Resume and Cover
Karen Resume and Cover
 
Modularizing Arcihtectures Using Dendrograms1
Modularizing Arcihtectures Using Dendrograms1Modularizing Arcihtectures Using Dendrograms1
Modularizing Arcihtectures Using Dendrograms1
 
Water's Wonders S15
Water's Wonders S15Water's Wonders S15
Water's Wonders S15
 
Old-Fashioned Mountain Fun W14
Old-Fashioned Mountain Fun W14Old-Fashioned Mountain Fun W14
Old-Fashioned Mountain Fun W14
 
innovation conference [3]
innovation conference [3]innovation conference [3]
innovation conference [3]
 
2009 ASME final
2009 ASME final2009 ASME final
2009 ASME final
 
Modular Def PSS DETC 20120217
Modular Def PSS DETC 20120217Modular Def PSS DETC 20120217
Modular Def PSS DETC 20120217
 
Practica tabla numerica
Practica tabla numericaPractica tabla numerica
Practica tabla numerica
 
Global outreach, EarthingSport
Global outreach, EarthingSportGlobal outreach, EarthingSport
Global outreach, EarthingSport
 
Iconic Adventures S16
Iconic Adventures S16Iconic Adventures S16
Iconic Adventures S16
 
Complicatedness_Tang
Complicatedness_TangComplicatedness_Tang
Complicatedness_Tang
 
Eric Stewart Resume
Eric Stewart ResumeEric Stewart Resume
Eric Stewart Resume
 
EarthingSport, Global sport and Environmental Outreach Brochure
EarthingSport, Global sport and Environmental Outreach BrochureEarthingSport, Global sport and Environmental Outreach Brochure
EarthingSport, Global sport and Environmental Outreach Brochure
 

Similar to Leong Paper 2011

A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENTA SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
Allison Thompson
 
Towards a Software Engineering Research Framework: Extending Design Science R...
Towards a Software Engineering Research Framework: Extending Design Science R...Towards a Software Engineering Research Framework: Extending Design Science R...
Towards a Software Engineering Research Framework: Extending Design Science R...
IRJET Journal
 
A systematic review of uncertainties in
A systematic review of uncertainties inA systematic review of uncertainties in
A systematic review of uncertainties in
ijseajournal
 
A guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project managementA guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project management
ijcsit
 
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdf
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdfComparing_Methods_for_Large-Scale_Agile_Software_D.pdf
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdf
muazfar131
 
Technology Transfer: Plan for Success
Technology Transfer: Plan for SuccessTechnology Transfer: Plan for Success
Technology Transfer: Plan for Success
Jennifer Flagg
 
Distributed Software Development Process, Initiatives and Key Factors: A Syst...
Distributed Software Development Process, Initiatives and Key Factors: A Syst...Distributed Software Development Process, Initiatives and Key Factors: A Syst...
Distributed Software Development Process, Initiatives and Key Factors: A Syst...
zillesubhan
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
ijseajournal
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
ijseajournal
 
Supporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research ProjectsSupporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research Projects
inventionjournals
 
Supporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research ProjectsSupporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research Projects
inventionjournals
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
A Systematic Literature Review Of Requirements Engineering Education
A Systematic Literature Review Of Requirements Engineering EducationA Systematic Literature Review Of Requirements Engineering Education
A Systematic Literature Review Of Requirements Engineering Education
Emma Burke
 
Operations Research
Operations ResearchOperations Research
Operations Research
Dr T.Sivakami
 
An Investigation of Critical Failure Factors In Information Technology Projects
An Investigation of Critical Failure Factors In Information Technology ProjectsAn Investigation of Critical Failure Factors In Information Technology Projects
An Investigation of Critical Failure Factors In Information Technology Projects
IOSR Journals
 
Methodology chapter
Methodology chapterMethodology chapter
Methodology chapter
engrhassan21
 
Aligning IT and business strategy an Australian university ca.docx
Aligning IT and business strategy an Australian university ca.docxAligning IT and business strategy an Australian university ca.docx
Aligning IT and business strategy an Australian university ca.docx
daniahendric
 
Agile Project Management A Systematic Literature Review Of Adoption Drivers ...
Agile Project Management  A Systematic Literature Review Of Adoption Drivers ...Agile Project Management  A Systematic Literature Review Of Adoption Drivers ...
Agile Project Management A Systematic Literature Review Of Adoption Drivers ...
Jeff Nelson
 
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
tamicawaysmith
 
Streamlining R&D Case Files at the AFRL
Streamlining R&D Case Files at the AFRLStreamlining R&D Case Files at the AFRL
Streamlining R&D Case Files at the AFRL
cjoesten
 

Similar to Leong Paper 2011 (20)

A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENTA SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
A SYSTEMATIC REVIEW OF UNCERTAINTIES IN SOFTWARE PROJECT MANAGEMENT
 
Towards a Software Engineering Research Framework: Extending Design Science R...
Towards a Software Engineering Research Framework: Extending Design Science R...Towards a Software Engineering Research Framework: Extending Design Science R...
Towards a Software Engineering Research Framework: Extending Design Science R...
 
A systematic review of uncertainties in
A systematic review of uncertainties inA systematic review of uncertainties in
A systematic review of uncertainties in
 
A guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project managementA guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project management
 
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdf
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdfComparing_Methods_for_Large-Scale_Agile_Software_D.pdf
Comparing_Methods_for_Large-Scale_Agile_Software_D.pdf
 
Technology Transfer: Plan for Success
Technology Transfer: Plan for SuccessTechnology Transfer: Plan for Success
Technology Transfer: Plan for Success
 
Distributed Software Development Process, Initiatives and Key Factors: A Syst...
Distributed Software Development Process, Initiatives and Key Factors: A Syst...Distributed Software Development Process, Initiatives and Key Factors: A Syst...
Distributed Software Development Process, Initiatives and Key Factors: A Syst...
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
 
Supporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research ProjectsSupporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research Projects
 
Supporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research ProjectsSupporting Information Management in Selecting Scientific Research Projects
Supporting Information Management in Selecting Scientific Research Projects
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
A Systematic Literature Review Of Requirements Engineering Education
A Systematic Literature Review Of Requirements Engineering EducationA Systematic Literature Review Of Requirements Engineering Education
A Systematic Literature Review Of Requirements Engineering Education
 
Operations Research
Operations ResearchOperations Research
Operations Research
 
An Investigation of Critical Failure Factors In Information Technology Projects
An Investigation of Critical Failure Factors In Information Technology ProjectsAn Investigation of Critical Failure Factors In Information Technology Projects
An Investigation of Critical Failure Factors In Information Technology Projects
 
Methodology chapter
Methodology chapterMethodology chapter
Methodology chapter
 
Aligning IT and business strategy an Australian university ca.docx
Aligning IT and business strategy an Australian university ca.docxAligning IT and business strategy an Australian university ca.docx
Aligning IT and business strategy an Australian university ca.docx
 
Agile Project Management A Systematic Literature Review Of Adoption Drivers ...
Agile Project Management  A Systematic Literature Review Of Adoption Drivers ...Agile Project Management  A Systematic Literature Review Of Adoption Drivers ...
Agile Project Management A Systematic Literature Review Of Adoption Drivers ...
 
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
32 rcm.org.ukmidwivesTh e latest step-by-step practical g.docx
 
Streamlining R&D Case Files at the AFRL
Streamlining R&D Case Files at the AFRLStreamlining R&D Case Files at the AFRL
Streamlining R&D Case Files at the AFRL
 

More from victor tang

創意Creativity Workshop
創意Creativity Workshop創意Creativity Workshop
創意Creativity Workshop
victor tang
 
US Naval War College 2010
US Naval War College 2010US Naval War College 2010
US Naval War College 2010
victor tang
 
Innova-Con-Flyer-USLetter (1)
Innova-Con-Flyer-USLetter (1)Innova-Con-Flyer-USLetter (1)
Innova-Con-Flyer-USLetter (1)
victor tang
 
ICED 2013 A
ICED 2013 AICED 2013 A
ICED 2013 A
victor tang
 
ICED 2009 FINAL v4
ICED 2009 FINAL v4ICED 2009 FINAL v4
ICED 2009 FINAL v4
victor tang
 
2014 Chinese press
2014 Chinese press2014 Chinese press
2014 Chinese pressvictor tang
 
2014 Barcelona Conference
2014 Barcelona Conference2014 Barcelona Conference
2014 Barcelona Conferencevictor tang
 
Chile 2008
Chile 2008Chile 2008
Chile 2008
victor tang
 

More from victor tang (8)

創意Creativity Workshop
創意Creativity Workshop創意Creativity Workshop
創意Creativity Workshop
 
US Naval War College 2010
US Naval War College 2010US Naval War College 2010
US Naval War College 2010
 
Innova-Con-Flyer-USLetter (1)
Innova-Con-Flyer-USLetter (1)Innova-Con-Flyer-USLetter (1)
Innova-Con-Flyer-USLetter (1)
 
ICED 2013 A
ICED 2013 AICED 2013 A
ICED 2013 A
 
ICED 2009 FINAL v4
ICED 2009 FINAL v4ICED 2009 FINAL v4
ICED 2009 FINAL v4
 
2014 Chinese press
2014 Chinese press2014 Chinese press
2014 Chinese press
 
2014 Barcelona Conference
2014 Barcelona Conference2014 Barcelona Conference
2014 Barcelona Conference
 
Chile 2008
Chile 2008Chile 2008
Chile 2008
 

Leong Paper 2011

  • 1. DRAFT Proceedings of the 2011 ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conferences 2011 IDETC/CIE August 28 - 31, 2011, Washington, DC USA DETC2011-48441 TOWARDS A TOOL FOR CHARACTERIZING THE PROGRESSION OF ACADEMIC 1 Copyright © 2011 by ASME RESEARCH Ming Leong Department of Mechanical Engineering Massachusetts Institute of Technology Cambridge, MA USA Abdelaziz Bazoune Department of Mechanical Engineering King Fahd University of Petroleum and Minerals Dhahran, Saudi Arabia Victor Tang Department of Mechanical Engineering Massachusetts Institute of Technology Cambridge, MA USA David R. Wallace Department of Mechanical Engineering Massachusetts Institute of Technology Cambridge, MA USA Warren Seering Department of Mechanical Engineering Massachusetts Institute of Technology Cambridge, MA USA ABSTRACT The importance of process in successful and effective technology and product development is widely recognized in industry. Tools, such as technology readiness levels (TRLs) and various metrics, have been developed and successfully employed to guide and strategically plan R&D processes, allocate resources, and calibrate expectations. Similarly, one might hypothesize that academic research might also benefit from similar tools that would assist both researchers and funding organizations. A research assessment tool should: 1) facilitate planning and communication; 2) effectively gauge progress; and 3) accommodate and capture the diverse scope of academic research. However, the inherent open-endedness and exploratory nature of research makes it difficult to concretely characterize research progress. This paper begins to develop an academic research measurement tool. The proposed Research Maturity Levels (RMLs) tool divides research activities into four main components: 1) background knowledge, 2) problem and question formulation, 3) procedures and results, and 4) resources. Within each component, the RML steps researchers through a process of increasing maturity levels. Additionally, each component includes mechanisms to formalize iterations and “eureka” moments—when directions and plans may change based upon new knowledge. Preliminary evaluation suggests that the tool has promise as a comprehensive measurement tool. It is hoped this work will result in a tool that can facilitate planning, help to measure and communicate research progress, and encompass the diverse scope of academic research goals. INTRODUCTION The importance of process in technology and product development has been widely recognized in industry, academia, and education. In product development, a systematic process guides the abilities of designers, allows for objective evaluation of results, helps to manage complex projects, facilitates teamwork, and provides a foundation for educating students [1, 2]. To track and measure the progression of new products and technologies, industry professionals have developed tools like Technology Readiness Levels (TRLs). These widely adopted tools help to guide and strategically plan R&D processes, allocate resources, and calibrate expectations between industry-based researchers and managers [3-6]. However, these needs are not unique to industry—the objectives of strategic planning, resource allocation, and communicating expectations are just as relevant in academic environments [7]. While many books have
  • 2. 2 Copyright © 2011 by ASME been written to guide budding researchers through the research process [8-11], these guides fall short on delivering the planning and communication benefits that TRLs provide to product development initiatives. However, efforts by the authors to apply TRLs to academic research illustrated that they are not well-suited for academic research. TRLs succeed when there is a tangible, well-defined product or technology objective. The goals in research are quite different and are often open-ended. As such, this paper begins to develop a new tool to systematically characterize academic research and research progress, with the goal of providing feedback to researchers and funding organizations. Like a TRL assessment, the framework guides researchers through a process of increasing maturity, but the structure enables researchers to use their own research methodologies and practices to achieve a diverse set of self-defined goals. The work draws inspiration from Technology Readiness Levels which are proven to aid in strategic planning, communication, and gauging progress in product development but do not suit the nature of academic research. The authors also draw upon research productivity metrics that have been applied in academic environments, but are not well suited for strategic planning and facilitating progress as they do not provide or suggest pathways for research progression. With the researcher as the primary user, our hope is that this work will lay a foundation for a comprehensive tool that can accomplish all these goals — facilitating planning and communication, accommodating and capturing the diverse scope of academic research, and effectively gauging progress. DISCUSSION OF PRIOR WORK Technology Readiness Levels Technology Readiness Levels (TRLs) are a numerical scale of technology maturity developed by NASA in the 1980’s. They formalize 10 (0-9) steps of technology maturation, defining a map ending with readiness for deployment in operational conditions [12]. Each successive level requires that the technology is functional in an increasingly realistic environment [13]. The General Accounting Office has endorsed the use of TRLs in military research and the Department of Energy and the Department of Defense have also adopted TRLs [14]. Although originally written for spacecraft development, TRLs have now been appropriately re-worded to be used in industries ranging from software to biomedical products [14, 15]. Other industry organizations similarly tailor TRLs for their area of technological expertise [16]. Graettinger [15] and Tillack [17] also illustrate how TRLs can be used and implemented in technologies that contain multiple systems that interface with one another. While TRLs are commonly used as a method of communicating the state of a technology as it is being transferred from research units to business units for further development [15]. The success of Technology Readiness Levels in various contexts demonstrates how an appropriate instrument can be designed for planning, communication, and gauging progress. By acting as a common language that managers and researchers both understand, the maturity of a technology can be assessed by non-technical decision-makers [12, 18]. They, in turn, can effectively allocate resources, develop business propositions, or strategically plan product launches [18]. However, when applied to academic research projects, TRLs fall short in encompassing the broad and diverse scope of research. 
TRLs, with their linear structure and wording, are geared toward tangible product development: a TRL of 9 is defined as "actual system proven through successful mission operations" [14]. While research does sometimes result in commercial products or deployed technology systems, most research is concerned with the development of new knowledge [3]. This makes the TRL scale a poor fit for research, as it is often unclear what the final "end product" will be. Additionally, the "end products" of research, once established, vary widely, from papers to new procedures, algorithms, or measurement tools [3], and most of them correspond to the lowest, or lowest few, levels of TRL maturity; TRLs lack sufficient resolution for early-stage academic research. TRLs' singular focus on assessing progress toward the end deliverable further limits their usefulness for research. Issues such as leadership, team formation, and the building of expertise and underlying knowledge are not considered, yet prior art suggests that such factors are important to successful research and are themselves forms of progress. Acknowledging these limitations, other researchers have proposed alternatives to, and modifications of, TRLs. Examples include Tang and Otto's [19] multifunctional enterprise readiness scale; Valerdi and Kohl's [20] approach to technology risk management; Heslop et al.'s [21] cloverleaf modification for technology transfer; Fomin et al.'s [22] incorporation of Quality Functional Deployment with TRLs; and Hicks's [23] modification of TRLs for product development.

Academic Research Productivity
The literature on research productivity focuses on determining the factors that make for a productive researcher and research environment. Data are collected through surveys and correlated with measures of productivity, such as publications and citations [24, 25]. In a literature review, Bland [26] identifies 12 characteristics of a research-conducive environment. Other studies have linked mentorship [27], collaboration [28], funding [29-32], organizational climate [33], group size [34], and participation in professional organizations [35, 36] to productivity. This body of literature underscores the importance of a researcher's environment to conducting research effectively. Where TRLs fall short in capturing the enablers within an academic research environment, the research productivity literature provides useful insights. These studies begin to form a picture of what a supportive, productive research environment entails and might even be useful in predicting success [37]. However, they do not assist researchers in conveying the state of a current research project, nor do they help with planning and communicating research progression.
Industrial Research and Development Evaluation Metrics
Research and development evaluation methods use quantitative and semi-quantitative measures to communicate to managers the impact and contributions of R&D departments [38]. An extensive literature addresses how to develop effective evaluation methods using surveys, bibliometrics, peer reviews, and related instruments [39-43]. Evaluations are conducted for a number of reasons, namely to motivate researchers, monitor the progress of research, evaluate profitability, aid coordination and communication, and stimulate organizational learning [44]. Empirical studies have found some success in using research evaluations [44, 45], but not without challenges, such as difficulty in choosing appropriate metrics, gaining acceptance of performance measures, and time lags between evaluation and tangible impacts [46, 47]. See work by Geisler [48], Marjanovic [49], Hauser [50], and Kerssen-van Drongelen [46] for literature reviews. While insightful and useful in industrial contexts, research evaluation metrics are not directly applicable to this work: their results and correlations provide a snapshot of a research program's historical performance rather than a pathway of progression for the research. This work focuses on developing a tool that guides researchers and supports communication and planning, rather than one that quantifies success.

This review of the literature highlights some of the strengths of industrial methods for measuring progress. However, none of the methods discussed fulfills the requirements of a comprehensive measurement tool for academic research: facilitate planning and communication, effectively gauge progress, and encompass the dimensions of a research program. Table 1 summarizes the methods' strengths and weaknesses.

TABLE 1: COMPARISON OF STRENGTHS AND WEAKNESSES OF TRLS AND RESEARCH PRODUCTIVITY AND EVALUATION METRICS

                                                          | TRLs   | Research productivity and evaluation metrics
Planning/communication                                    | Strong | Weak
Gauging progress                                          | Strong | Weak
Encompassing dimensions of a successful research program  | Weak   | Strong

RESEARCH PROGRESSION LEVELS: A FIRST ATTEMPT
A research-oriented adaptation of TRLs, named Research Progression Levels (RPLs), was proposed first. Table 2 compares the Technology Readiness Levels and the Research Progression Levels. We asked eight mechanical engineering faculty members to use the TRLs and RPLs to map one of their active research projects. Although the Research Progression Levels were more appropriate for academic research than Technology Readiness Levels, they were still confusing to researchers. It became clear that neither scale was successful in accomplishing the goals of this research.

TABLE 2: COMPARISON OF TRLS AND RPLS, AN INITIAL, HIGHLY PARALLEL TOOL USED TO PINPOINT DEFICIENCIES IN APPROACH

Level | Technology Readiness Levels [14] | Research Progression Levels
1  | Basic principles observed and reported | Expert knowledge of prior art, its meaning and value
2  | Technology concept and/or application formulated | Key research questions are selected and justified
3  | Analytical and experimental critical function and/or characteristic proof of concept | Research strategy and conceptual structure of the REP are fully formulated
4  | Component and/or breadboard validation in a laboratory environment | Initial test of research strategy complete; results have been interpreted and documented
5  | Component and/or breadboard validation in a relevant environment | Research procedures tested under representative conditions
6  | System/subsystem model or prototype demonstration in a relevant environment | Complete detailed research procedures are defined
7  | System prototype demonstration in an operational environment | Using the research procedures, results have been systematically collected under representative conditions
8  | Actual system completed and qualified through test and demonstration | Research procedures have been completed under the agreed robust regime
9  | Actual system proven through successful mission operations | Results have been interpreted
10 | (none) | REP is complete and delivered

The academic researchers that were interviewed voiced several key concerns:
• The numerical scaling of the TRLs and RPLs gave a sense of a grade; faculty felt hesitant about being "scored", despite efforts to emphasize that the objective of the tool was to provide guidance and structure for moving research forward.
• Faculty felt strongly that the process needed to recognize that the objective of each project was distinct.
• When communicating with funding organizations, collaborators, and others, an important first step in a project is to
reach an agreement about the goal of the research. This step did not appear in the proposed RPLs.
• Iterative processes and serendipitous changes in direction are vital to research but were not accounted for in the tool.
• Research contains multiple components, and progress is often made at different rates within each component, unlike the linear structure of the RPL tool.

In addition to this overall feedback, we observed that the Research Progression Levels remained confusing and difficult for researchers to use, despite multiple revisions based on feedback. The interviews revealed a fundamental flaw in the RPLs: they conflated the "things researchers do" (e.g., literature search, question selection) with progress on those aspects, folding both into a single progression scale. Researchers may tackle different aspects of research in a different order, or in parallel, rather than in the fixed linear order the RPLs assume. This led to the idea of identifying key research components that can then be independently assessed for progression.

RESEARCH MATURITY LEVELS: A SECOND ITERATION
The second iteration grew out of the idea that key research components can be assessed independently along unique scales of maturity, rather than progress. Progress implies a journey towards a known, final end state. A research project's end state is often unknown and ever-changing, so instead the maturity, i.e., the level of refinement and focus, is tracked. The academic research measurement tool developed from this insight, named Research Maturity Levels (RMLs), takes the form of a worksheet. From a completed worksheet, the researcher can describe a profile of the project in terms of its maturity on a number of axes.

Multiple axes of progress
The one-dimensional, final-deliverable-focused TRL and initial RPL scales measured a linear progression towards a final technology or goal. However, even within product development, it is well understood that technological progress is not the only area important to a successful technology development program [23]. Drawing upon the research process literature, the characterization of research was first divided into separate, distinct axes or components: background knowledge, problem and question formulation, procedures and results, and resources. In this way, maturity on each axis or component can be assessed independently. The four research components may progress in parallel or sequentially, and may advance at different rates depending on preliminary findings, the stage of the research, and other factors. Thus, each research component is given its own maturity scale appropriate for that component. Graphically, these research components can be visualized as progress bars that plot the maturity of each component, depicting a profile of the project, as shown in Figure 1.

FIGURE 1: VISUALIZATION OF RESEARCH PROJECT PROFILE

The maturity levels within a component can be illustrated through an example of the problem and question formulation component (see Figure 2). The first maturity level, Q1, requires only a "broad statement of the problem". From that broad statement, Q2 requires a "list of preliminary research questions". Q3 narrows the preliminary questions down to a more focused "statement of research questions". The research questions are then parsed into a "structured statement of problem for investigation" (Q4), which finally results in a "testable, concrete problem statement" (Q5).
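To make this multi-axis structure concrete, the following short sketch shows one possible representation of a project profile and a text rendering of the progress bars of Figure 1. The sketch is illustrative only and is not part of the published tool; the class and function names are our own, and only the four component names and the five-level scale come from the paper.

```python
from dataclasses import dataclass

@dataclass
class ComponentState:
    """Maturity of one RML component on its own axis (names illustrative)."""
    name: str
    maturity: int       # current maturity level, e.g., Q1..Q5 mapped to 1..5
    max_level: int = 5  # each component defines its own scale length

def render_profile(states):
    """Render a text version of the progress-bar profile of Figure 1."""
    for s in states:
        bar = "#" * s.maturity + "-" * (s.max_level - s.maturity)
        print(f"{s.name:<34} [{bar}] {s.maturity}/{s.max_level}")

# The four RML components, each assessed independently; levels are made up.
profile = [
    ComponentState("Background knowledge", 4),
    ComponentState("Problem and question formulation", 2),
    ComponentState("Procedures and results", 1),
    ComponentState("Resources", 3),
]
render_profile(profile)
```

The key design point is that each component carries its own scale, so a project can be mature on one axis (say, background knowledge) while remaining immature on another.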
The other components guide the researcher through a similar progression. The background knowledge component sets a framework for a well-organized, thorough review of the literature. The procedures and results component ensures meaningful results by requiring an increasing level of fidelity from the experimental procedures. Finally, the resources component helps researchers build a productive environment in terms of physical, intellectual, human, monetary, and management resources.

Maturity levels defined by milestones
In the wording of TRLs, each level requires that a technology achieve a specific deliverable. For example, to achieve a TRL of 6, a "system/subsystem model or prototype demonstration in a relevant environment" is required [14]. Similarly, each maturity level of the RMLs comprises a series of characteristic research milestones. Using a rating scale, researchers judge the completeness of each milestone for their project. The milestones and their progress then act as a guide to determine the project's current maturity level for the given component. For example, Figure 3 shows the milestones used for the problem and question formulation component. The first maturity level (Q1) states that a "broad statement of [the] problem" has been established. To gauge progress on this milestone, the RML is used to assess whether the researchers have a "thorough understanding of literature and engineering context", have "identified
challenges" and "problems appropriate for research", and have articulated the "impact and importance of [the] question in [a] broader context."

FIGURE 2: EXAMPLE OF MATURITY LEVELS FROM THE PROBLEM AND QUESTION FORMULATION COMPONENT
Q1: Broad statement of problem
Q2: List of preliminary research questions
Q3: Statement of research questions
Q4: Structured statement of problem for investigation
Q5: Testable, concrete problem statement

Iteration accounting
An important feature of Research Maturity Levels is iteration accounting: capturing the notion that trying multiple approaches to explore a problem, or simply doing things over, is an essential part of learning and represents significant research maturation. In traditional TRLs, if a project makes three different attempts at a breadboard experiment, the project is evaluated as a TRL 4 whether it is on the first or the final experiment. The iterations are not documented as progress, even though new breakthroughs in understanding may have been made. Unlike TRLs, the RMLs attempt to document and encompass these revisions so that progress made through iteration is recognized. Iterations may occur because of new findings, planned progressions, or unforeseen complications. At the beginning of each component, there is a space to record the current iteration, the number of anticipated iterations, and the reason for the current iteration. See Figure 3 for an example of iteration accounting in the context of the problem and question formulation component.

Flexible end goals
Technology Readiness Levels are very specific in their end product: an "actual system proven through successful mission operations". In research, however, the "end product" is usually not as concrete and well defined as a functioning technology; rather, the ultimate goal of research is to uncover and develop new knowledge. This knowledge can take the form of a journal article, a patent, a new procedure, and so on. To accommodate this diverse range of research endeavors, the RMLs progress towards mastery of the research components, culminating in formal, peer-reviewed dissemination of research findings as the final maturity level of the procedures and results component.

Development of a productive research environment
The literature recognizes the importance of a strong research environment for researcher productivity. To assist researchers in building a productive environment around them, the resources component of the RMLs acts as a checklist for necessary monetary, intellectual, human, physical, and managerial resources. This component differs slightly from the others: first, it is further decomposed into sub-components by resource type, and second, it is structured less as a progression and more as a checklist. See Figure 4 for an example of the research support environment sub-component.

Verification of RMLs' efficacy
The authors conducted five preliminary studies, guiding mechanical engineering faculty and graduate students through the tool to ensure that the prompts were properly interpreted and the form was understood. These usability studies verified that the structure and format of the Research Maturity Levels were successful in creating profiles of a variety of research projects. Initial tests also suggest that if the tool is used periodically, progression in a research project can be tracked, whether progress is made through new iterations or through higher maturity levels within an iteration.
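One way to see how such periodic use could separate the two kinds of progress, new iterations versus within-iteration maturity gains, is to compare successive snapshots of the worksheet. The sketch below is a hypothetical illustration only; the snapshot format, dates, and component labels are our assumptions, not part of the published worksheet.

```python
from datetime import date

# Hypothetical periodic snapshots: for each component, the (iteration,
# maturity level) pair recorded when the worksheet was filled out.
snapshots = [
    (date(2011, 1, 10), {"Problem formulation": (1, 2), "Procedures": (1, 1)}),
    (date(2011, 3, 15), {"Problem formulation": (2, 1), "Procedures": (1, 2)}),
]

def describe_change(before, after):
    """Classify progress as a new iteration or a within-iteration gain."""
    (it0, lvl0), (it1, lvl1) = before, after
    if it1 > it0:
        return f"progress via new iteration ({it0} -> {it1})"
    if lvl1 > lvl0:
        return f"maturity gain within iteration {it1} (level {lvl0} -> {lvl1})"
    return "no recorded change"

(_, prev), (_, curr) = snapshots
for component in prev:
    print(component + ":", describe_change(prev[component], curr[component]))
```

Note that a new iteration may legitimately reset the maturity level downward, as in the first component above; under iteration accounting this still counts as progress rather than regression.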
The authors plan to continue testing the tool to improve its usability and to understand researchers' functional requirements. Our preliminary tests with faculty also uncovered interesting observations of how this tool may be used. First, we observed a difference in the way scientists and engineers complete the tool. More science-minded faculty tended to shy away from classifying open-ended milestones, such as a "thorough understanding of literature and engineering context", as "complete", noting that there are always new things to discover and learn despite their best efforts to understand the space. Faculty with engineering mindsets felt more confident classifying a milestone as "complete" when a clear solution had emerged from their work. Another observation was that faculty did not always seem familiar with the idea of using a structured research process. For example, some faculty had trouble distinguishing "testing of core research procedures in approximate conditions" from "applying the full research procedure". In product design and technology development, the practice of using low-level models to verify core concepts is well known and widely used. Bringing this concept to research may improve the research process and make it more efficient.
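As one possible reading of how the worksheet of Figure 3 could be scored mechanically, the sketch below rolls milestone ratings up into a component maturity level. The milestone texts and the rating scale are transcribed from Figure 3, but the numeric mapping and the all-milestones-complete rule are our assumptions; the paper itself leaves the final judgment to the researcher, who simply checks the level that best describes the work.

```python
# Rating scale from the worksheet; the numeric mapping is an assumption.
SCALE = {
    "Not started": 0.0,
    "Less than half": 0.25,
    "About half completed": 0.5,
    "More than half": 0.75,
    "Almost completed/Completed": 1.0,
}

# Milestones per maturity level for problem and question formulation,
# transcribed from Figure 3 (Q3-Q5 omitted here for brevity).
LEVELS = {
    "Q1": [
        "Thorough understanding of literature and engineering context",
        "Identified challenges",
        "Identified problems appropriate for research",
        "Impact and importance of question in broader context articulated",
    ],
    "Q2": [
        "Key characteristics of problem are articulated",
        "List of many research questions related to problem",
    ],
}

def maturity(ratings):
    """Highest level whose milestones are all rated complete (one possible rule)."""
    achieved = "not started"
    for level, milestones in LEVELS.items():  # insertion order: Q1, Q2, ...
        if all(SCALE.get(ratings.get(m, "Not started"), 0.0) >= 1.0
               for m in milestones):
            achieved = level
        else:
            break
    return achieved

ratings = {m: "Almost completed/Completed" for m in LEVELS["Q1"]}
print(maturity(ratings))  # -> "Q1", since the Q2 milestones default to "Not started"
```

A stricter or looser roll-up rule (e.g., treating "More than half" as sufficient) would be a one-line change to the threshold, which is exactly the kind of calibration the authors' usability studies would inform.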
FIGURE 3: FULL EXAMPLE OF PROBLEM AND QUESTION FORMULATION COMPONENT

Problem and Question Formulation
This component begins with the statement of a problem of interest. It then progresses through the development of research questions, finally concluding with a testable, well-defined problem statement or hypothesis with clear research expectations and scope.

Current iteration: 1 2 3 4 5 6 7 8 9 10
Final anticipated iteration: 1 2 3 4 5 6 7 8 9 10 / Don't know
Explanation for current iteration: No previous iteration / Pre-planned logical progression / New insights obtained in prior iteration / Approach of prior iteration did not work out as hoped / Other reasons

Please respond to the following prompts considering only the current iteration. Once you have completed this section, put a check beside the maturity level that best describes the current state of the research. Each milestone is rated on the scale Not Applicable / Not started / Less than half / About half completed / More than half / Almost completed/Completed, with space for notes.

Q1 Broad statement of problem
- Thorough understanding of literature and engineering context
- Identified challenges
- Identified problems appropriate for research
- Impact and importance of question in broader context articulated
- Overall

Q2 List of preliminary research questions
- Key characteristics of problem are articulated
- List of many research questions related to problem
- Overall

Q3 Statement of research questions
- Focused study related to preliminary questions
- Prioritization of promising research questions
- Short list of critical research questions formed
- Most critical question(s) chosen
- Overall

Q4 Structured statement of problem for investigation
- Formulate research hypothesis/problem requirements
- Hypothesis/problem divided into manageable subproblems
- Feasibility of addressing hypothesis/problem determined
- Overall

Q5 Testable, concrete problem statement
- Variables (dependent, independent, control, intervening) identified
- Focused research expectations and scope defined
- Operational definitions and assumptions stated
- Overall

Problem and Question Formulation maturity level (check one):
Q1: Broad statement of problem
Q2: List of preliminary research questions
Q3: Statement of research questions
Q4: Structured statement of problem for investigation
Q5: Testable, concrete problem statement
FIGURE 4: EXAMPLE OF THE RESEARCH SUPPORT ENVIRONMENT SUB-COMPONENT OF THE RESOURCES COMPONENT

Research Support Environment (check here if this resource is not relevant)
This sub-category refers to the nurturing of connections with relevant professionals to build the emotional and intellectual support environment necessary for research. Each item is rated on the scale Not Applicable / Not started / Less than half / About half completed / More than half / Almost completed/Completed, with space for notes.

- Advocacy of supervisors and/or administration received
- Support hiring students and staff in place
- Support streamlining purchasing in place
- Support building and maintaining lab in place
- Support to obtain money in place
- Support to operate projects in flexible, timely manner in place
- Overall

- Intellectual research environment established
- Relevant network of mentors and advisors identified
- Appropriate academic collaborators identified and contacted
- Relationships with relevant industry professionals established
- Overall

Comments: please use the space below to explain the current state of any aspect of the research support environment.

FUTURE WORK
This work describes steps toward developing a comprehensive tool for characterizing the progression of academic research. The proposed Research Maturity Levels tool seeks to fulfill the requirements of facilitating planning and communication, accommodating and capturing the diverse scope of academic research, and effectively gauging progress. Initial tests suggest that the proposed RMLs successfully accomplish these goals and that further improvements may increase their efficacy. Through usability case studies, the structure and wording of the RMLs will be refined until the tool can be widely distributed for use by a range of researchers. This final form will be deployed as online software. While our primary focus has been to develop a comprehensive tool for researchers, we have also been exploring ways to provide researchers with more feedback than just the profile of their research. Inherent to all research is risk, and many researchers intuitively conduct risk-value assessments while choosing projects or new directions. We hope that, from the project profile developed by the Research Maturity Levels, we will be able to provide researchers with a formal risk profile of their project, suggesting ways to manage risks, set priorities and intermediate-term goals, and make decisions.

ACKNOWLEDGMENTS
The authors would like to thank the King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, Saudi Arabia, for funding the research reported in this paper through the Center for Clean Water and Clean Energy at MIT and KFUPM.

REFERENCES
1. Ulrich, K.T. and S.D. Eppinger, Product design and development. 3rd ed.
2. Pahl, G., K. Wallace, and L.T.M. Blessing, Engineering design: a systematic approach. 3rd ed. 2007, London: Springer.
3. Garrett-Jones, S. and D. Aylward, Some recent developments in the evaluation of university research outcomes in the United Kingdom. Research Evaluation, 2000. 9(1): p. 69-75.
4. Kirby, M., D. Mavris, and M. Largent, A process for tracking and assessing emerging technology development programs for resource allocation, in 1st AIAA ATIO Forum. 2001.
5. Kenley, C. and T. Creque, Predicting technology operational availability using technical maturity assessment, in Systems Engineering. 1999. p. 198-211.
6. Godener, A. and K. Soderquist, Use and impact of performance measurement results in R&D and NPD: an exploratory study, in R&D Management. 2004. p. 191-219.
7. Brew, A., The nature of research: inquiry in academic contexts. 2001, London; New York: Routledge/Falmer.
8. Bock, P., Getting it right: R&D methods for science and engineering. 2001, San Diego, Calif.; London: Academic Press.
9. Leedy, P.D. and J.E. Ormrod, Practical research: planning and design. 9th ed. 2010, Upper Saddle River, NJ: Merrill.
10. Blaxter, L., C. Hughes, and M. Tight, How to research. 3rd ed. 2006, Maidenhead; New York: Open University Press.
11. Kumar, R., Research methodology: a step-by-step guide for beginners. 2nd ed. 2005, London: SAGE.
12. Mankins, J.C., Technology readiness levels. White paper, April 1995.
13. Mackey, R. and R. Some, Readiness levels for spacecraft information technologies, in IEEE Aerospace Conference. 2005.
14. Office of the Director of Defense Research and Engineering, Technology Readiness Assessment (TRA) Deskbook. 2009.
15. Graettinger, C.P., et al., Using the Technology Readiness Levels scale to support technology management in the DoD's ATD/STO environments. 2002.
16. Cornford, S.L. and L. Sarsfield, Quantitative methods for maturing and infusing advanced spacecraft technology, in 2004 IEEE Aerospace Conference Proceedings. 2004.
17. Tillack, M., et al., An evaluation of fusion energy R&D gaps using technology readiness levels, in Fusion Science and Technology. 2009. p. 949.
18. Dubos, G.F., J.H. Saleh, and R. Braun, Technology readiness level, schedule risk and slippage in spacecraft design: data analysis and modeling. 2007.
19. Tang, V. and K.N. Otto, Multifunctional enterprise readiness: beyond the policy of build-test-fix cyclic rework. 2009.
20. Valerdi, R. and R.J. Kohl, An approach to technology risk management, in Engineering Systems Division Symposium. 2004.
21. Heslop, L., E. McGregor, and M. Griffith, Development of a technology readiness assessment measure: the Cloverleaf Model of technology transfer, in The Journal of Technology Transfer. 2001. p. 369-384.
22. Fomin, P., T. Mazzuchi, and S. Sarkani, Incorporating maturity assessment into Quality Functional Deployment for improved decision support analysis, risk management, and defense acquisition, in Proceedings of the World Congress on Engineering and Computer Science. 2009.
23. Hicks, B., A. Larsson, S. Culley, and T. Larsson, A methodology for evaluating technology readiness during product development, in International Conference on Engineering Design, ICED '09. 2009. Stanford, CA.
24. Dundar, H. and D. Lewis, Determinants of research productivity in higher education, in Research in Higher Education. 1998. p. 607-631.
25. Toutkoushian, R.K., et al., Using publication counts to measure an institution's research productivity, in Research in Higher Education. 2003. p. 121-148.
26. Bland, C. and M. Ruffin, Characteristics of a productive research environment: literature review, in Academic Medicine. 1992. p. 385-397.
27. Steiner, J.F., et al., Assessing the role of influential mentors in the research development of primary care fellows, in Academic Medicine. 2004. p. 865.
28. Landry, R., N. Traore, and B. Godin, An econometric analysis of the effect of collaboration on academic research productivity, in Higher Education. 1996. p. 283-301.
29. Geuna, A. and B.R. Martin, University research evaluation and funding: an international comparison, in Minerva. 2003. p. 277-304.
30. Lindsey, D., The relationship between performance indicators for academic research and funding: developing a measure of return on investment in science, in Scientometrics. 1991. p. 221-234.
31. Guellec, D. and B. de la Potterie, From R&D to productivity growth: do the institutional settings and the source of funds of R&D matter?, in Oxford Bulletin of Economics and Statistics. 2004. p. 353-378.
32. Gulbrandsen, M. and J. Smeby, Industry funding and university professors' research performance, in Research Policy. 2005. p. 932-950.
33. Ramsden, P., Describing and explaining research productivity, in Higher Education. 1994. p. 207-226.
34. Louis, K., et al., Becoming a scientist: the effects of work-group size and organizational climate, in Journal of Higher Education. 2007. p. 311+.
35. Hitchcock, M., et al., Professional networks: the influence of colleagues on the academic success of faculty, in Academic Medicine. 1995. p. 1108-1116.
36. Ynalvez, M. and W. Shrum, Professional networks, scientific collaboration, and publication productivity in resource-constrained research institutions in a developing country, in Research Policy. 2011. p. 204-216.
37. Hekelman, F., S. Zyzanski, and S. Flocke, Successful and less-successful research performance of junior faculty, in Research in Higher Education. 1995. p. 235-255.
38. Werner, B. and W. Souder, Measuring R&D performance: state of the art, in Research-Technology Management. 1997. p. 34-42.
39. Poh, K., B. Ang, and F. Bai, A comparative analysis of R&D project evaluation methods, in R&D Management. 2001. p. 63-75.
40. Coccia, M., A basic model for evaluating R&D performance: theory and application in Italy, in R&D Management. 2001. p. 453-464.
41. Ojanen, V. and O. Vuola, Coping with the multiple dimensions of R&D performance analysis, in International Journal of Technology Management. 2006. p. 279-290.
42. Butler, L., Using a balanced approach to bibliometrics: quantitative performance measures in the Australian Research Quality Framework, in Ethics in Science and Environmental Politics (ESEP). 2008. p. 83-92.
43. Garcia-Aracil, A., A. Gracia, and M. Perez-Marin, Analysis of the evaluation process of the research performance: an empirical case, in Scientometrics. 2006. p. 213-230.
44. Chiesa, V., et al., Designing a performance measurement system for the research activities: a reference framework and an empirical study, in Journal of Engineering and Technology Management. 2008. p. 213-226.
45. Kim, B. and H. Oh, An effective R&D performance measurement system: survey of Korean R&D researchers, in Omega-International Journal of Management Science. 2002. p. 19-31.
46. Kerssen-van Drongelen, I. and A. Cooke, Design principles for the development of measurement systems for research and development processes, in R&D Management. 1997. p. 345-357.
47. Garcia-Valderrama, T. and E. Mulero-Mendigorri, Content validation of a measure of R&D effectiveness, in R&D Management. 2005. p. 311-331.
48. Geisler, E., The metrics of technology evaluation: where we stand and where we should go from here, in International Journal of Technology Management. 2002. p. 341-374.
49. Marjanovic, S., S. Hanney, and S. Wooding, A historical reflection on research evaluation studies, their recurrent themes and challenges. Technical report. 2009. p. 55.
50. Hauser, J., Research, development, and engineering metrics, in Management Science. 1998. p. 1670-1689.