J. Computer Sci., 3 (3): 144-148, 2007
Test accomplishments: As introduced earlier, this section presents the results of the test accomplishments for a total of 7 TC tests using TEST. As explained earlier, TC is divided into two types. In MTC, however, the functions of the software are simple and no defects were reported, so all results from testing MTC were excluded. Accordingly, only the 7 results for ATC are introduced next. Fig. 3 shows the primitive data of the test results. Each software defect is classified by several criteria to identify its properties.

Fig. 1: Hardware configuration of TC
Fig. 2: Snapshots for the test phases using TEST (Product Modeling in Microsoft Excel, Model Verification in a Microsoft Access DB, Test Case Generation and Test Report Generation)
Test automation tool: The main reason to use an automation tool for testing is to overcome the demerits of the existing manual test. The time cost of framing the Defect Record, documentation and report writing can be reduced, and the exactness of the test results can be guaranteed. Previous studies that compare manual and automated testing show that the manual test requires more than two times the expense and time. How far a tool automates every testing procedure can be a standard for its ability: the more the developed testware supports test procedures such as test script generation, test execution and test report generation, the greater its ability as an automation tool. Studies on realizing such automation, such as automated operation of test cases and automated comparison between actual and expected results, have been in progress, and a study on organizing a test report template for embedded software has begun[9-11].

TEST (Testing-stand for Embedded Software Test) includes 5 functions for the verification of embedded software: 'product modeling', which captures the FSM and product specification in Microsoft Excel; 'model verification', which verifies the exactness of the user's input[13-15]; 'test script generation', which creates test cases automatically based on the product specification; 'test execution and comparison', which executes the test cases and compares actual with expected results; and 'test report generation', which reports the test results used as data for defect verification[16,17]. TEST is embodied in Microsoft Visual Basic .NET, VBA macros and many DLLs based on C.

Fig. 3: Primitive data for analyzing test results

Fig. 4: Time cost for test phases (minimum, maximum and average times for the Product Modeling, Test Case Generation, Test Execution, Reviewing and Adjusting, and Test Report Generation phases)

Time cost for testing: As the results in Fig. 4 show, it takes from 2 weeks to 1 month to test a unit system. The number of test cases accomplished for one unit system, with small differences in the count between systems, is about 50,000.

In each phase, it was found that the time for executing the test cases and for reviewing and adjusting the found defects takes too long. The execution of the test cases takes more than 35% of the whole process. The directly related 'reviewing and adjusting defect' phase starts wherever a defect happens during test execution, so every software defect that is found lengthens the test period. In consequence, a study on operating the test cases in a preferred order, so that such software defects are found quickly during the test process, has been going on.
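The study mentioned above, on operating test cases in a preferred order to find defects quickly, can be sketched as a simple reordering of the test suite; the structure and names below are ours for illustration, not TEST's:

```c
#include <stdlib.h>

/* Sketch: run test cases in a preferred order, here by sorting the
 * suite so that cases which revealed defects before run first.
 * TestCase and its fields are illustrative, not from TEST. */
typedef struct {
    int id;            /* test case identifier */
    int past_defects;  /* defects this case has revealed before */
} TestCase;

/* Comparator for qsort: more defect-prone cases come first. */
static int more_defects_first(const void *a, const void *b) {
    const TestCase *x = a, *y = b;
    return y->past_defects - x->past_defects;  /* descending order */
}

/* Reorders the suite in place so likely-failing cases run early. */
void prioritize(TestCase *suite, int n) {
    qsort(suite, n, sizeof *suite, more_defects_first);
}
```

Any other preference criterion (severity, cost, coverage) can be substituted by changing only the comparator.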
if (iVar >= 255) ----------------- correct code
if (iVar > 255) ------------------ defect code

Fig. 5: Example of programming defect
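Fig. 5's defect is a boundary error: '>' where the specification requires '>='. Only a test case at the boundary value itself distinguishes the two variants, as this small sketch shows (the function names are ours):

```c
/* The two variants from Fig. 5, wrapped as predicates. */
static int exact_check(int iVar)  { return iVar >= 255; }  /* correct code */
static int defect_check(int iVar) { return iVar >  255; }  /* defect code  */

/* The variants agree for every input except the boundary value 255,
 * so a boundary-value test case is what reveals the defect. */
int boundary_reveals_defect(int iVar) {
    return exact_check(iVar) != defect_check(iVar);
}
```

Inputs such as 254 or 256 pass both variants, which is why an off-boundary test suite can miss this 74%-class of programming defect.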
Fig. 6: Classifications and test results of the software defects (Programming Error, Specification Misunderstanding, Specification Error and Total for the unit systems A-G)

Software defect classification: Using the test automation tool, 1-25 software defects were found in each unit system. The found defects were classified into Programming Defect, Specification Misunderstanding and Specification Defect (Fig. 6).

Programming Defect is the traditional kind of software defect, caused by a programmer's mistake, and it takes 74% of the found defects. An example of this defect is using 'bigger' instead of 'bigger or equal'; Fig. 5 shows such a Programming Defect.

Specification Misunderstanding means that a developer misunderstands the given specification, so a defect occurs. A developer analyzes the specification on his or her own, but multitudinous and complicated logics are described in it, so defects happen. These kinds of problems are almost 14% of the whole.

Specification Defect is when the specification used for product development itself includes a defect. In this case, the test automation tool reported a defect in the SUT, but in fact the problem lay in the specification used by the tool. This class takes 12% of the whole.

Hardware configuration and software defect: Fig. 7 shows the number of software defects against the number of actuators and sensors connected in one unit TC. Across the 7 systems, the result is that the more complicated the hardware organization, the more software defects appear. As the number of hardware components connected to the system grows, there are more things to control inside the embedded software, which also causes complication as the software grows. These results mean that, when embedded software is measured, the complexity of the hardware configuration can be used as a factor, just as lines of program have been used for the complexity of existing traditional software.

We can predict the number of software defects in given software using these results. Fig. 7 shows that NSoftwareFault (the number of software faults) grows as NEndItem (the number of End Items) grows. This fact could be used to decide when to stop software testing. Fig. 7 and 8 were drawn with MINITAB based on real test results. We did a polynomial regression analysis between NSoftwareFault and NEndItem and obtained the regression equation shown as Equation 1 (S = 3.23859, R-Sq = 90.2%, R-Sq(adj) = 85.3%, FRegression = 18.38 and PRegression = ).

Fig. 8: Residual plots for software fault

Y = 0.0745X² - 2.7535X + 27.531

Equation 1: Regression equation between NSoftwareFault (Y) and NEndItem (X)
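Equation 1 can be applied directly to estimate the expected fault count for a planned hardware configuration; a small sketch of that use follows (the coefficients are from Equation 1, the function name is ours):

```c
/* Predicted number of software faults (Y) from the number of
 * End Items (X), per the regression of Equation 1:
 *   Y = 0.0745*X^2 - 2.7535*X + 27.531 */
double predicted_faults(double n_end_item) {
    return 0.0745 * n_end_item * n_end_item
         - 2.7535 * n_end_item
         + 27.531;
}
```

Such an estimate, compared against the faults found so far, is the kind of input the paper suggests for deciding when to stop software testing.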
Fig. 7: Distribution of software defect

Output variable and software defect: Fig. 9 is the result of mapping the software defects to the output variables. The test objects, the 7 TCs, have a similar hardware organization and, depending on the system, additional inputs.
Therefore, in a black-box test, which verifies software through its inputs and outputs, an output defect is considered a software defect. As shown in Fig. 9, the output defects are concentrated on specific outputs. In the figure, outputs O17, O21 and O35 were assigned their values through many different conditions, and more software defects were caused there than at the other outputs.

Fig. 9: Distribution of software defect based on output

Fig. 10: Distribution of software defect based on internal modules

Fig. 11: Software defects are on a decreasing trend (the number of software faults for the 1st to 4th systems under test)

These results could also be used to order the test cases, as a test case prioritization technique. For example, we should change the execution order of the test cases so as to find the software defects more quickly. As mentioned earlier, the review phase requires much time to decide whether a test result is a software defect or not. For that reason, various test case prioritization techniques have recently been researched.

Test case prioritization techniques have been shown to improve regression-testing activities by increasing the rate of defect detection, thus allowing testers to fix defects earlier. Hema Srikanth et al. developed a prioritization scheme with two main goals: identifying the severe defects earlier and minimizing the cost of test case prioritization. They proposed a prioritization scheme based on customer-assigned priority and requirement complexity. This scheme has the restriction that prioritization is done by an expert, although it is hard to secure such an expert in a small or medium-sized organization.

Sebastian Elbaum et al. order the test cases based on the number of function calls and propose a scheme in which the modified parts of the software have higher priority than the others when testers test the software in the fashion of regression testing. This method can hardly be applied to black-box testing, because its ideas are based on white-box testing.
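The defect concentration shown in Fig. 9 can feed such a prioritization: computing it is just a per-output tally of the defect record. A hedged sketch of that tally (the data and names are ours, not the paper's):

```c
/* Sketch: tally software defects per output variable to see where
 * they concentrate (cf. outputs O17, O21 and O35 in Fig. 9).
 * The defect log used in testing below is illustrative data. */
#define NUM_OUTPUTS 53

/* Given a log of defect reports, each naming the output variable it
 * was observed on, fill counts[] and return the most defect-prone
 * output's index. */
int hottest_output(const int *defect_outputs, int n_defects,
                   int counts[NUM_OUTPUTS]) {
    for (int o = 0; o < NUM_OUTPUTS; o++)
        counts[o] = 0;
    for (int i = 0; i < n_defects; i++)
        counts[defect_outputs[i]]++;          /* per-output tally */
    int best = 0;
    for (int o = 1; o < NUM_OUTPUTS; o++)
        if (counts[o] > counts[best])
            best = o;
    return best;  /* index of the output with the most defects */
}
```

Test cases exercising the hottest outputs would then be scheduled first.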
Fig. 10 shows the corresponding distribution over the internal logic. Fig. 9 looked at the relationship between the defects and specific outputs; similarly, Fig. 10 is about the relationship between the defects and specific internal logic. These results can be used to predict the confidence of embedded software: the framing of specific logic, or of a system related to a specific variable, should be kept clear, and complicated formations of specific logic should be reorganized into a simpler form.

Software defect reduction: Fig. 11 shows the number of software defects in TCs delivered by the same companies. They are different types of system, but the basic organization of HW/SW is similar. After the development of the test automation tool, 13 software defects were found in the first model it was applied to, while only 3 defects were discovered in the fourth model. This shows that when a company that orders an embedded system applies a test automation tool, its positive attitude toward quality verification affects the product developer, so work from the development step onward raises the product's quality. It can also be concluded that applying a test automation tool supports the verification of, and the discussion about, the developer's product.

CONCLUSION AND FUTURE WORKS

In this study, TC, which is one of the embedded systems, was introduced and tested with the automation tool for black-box testing. The results of the several TC tests show that additionally applying a test automation tool to embedded software that has already been manually tested can uncover further defects. Accordingly, this confirms a positive effect on increasing the product's quality. In the future, in view of the relation between the timing of test accomplishment and the discovery of defects, studies on increasing the utility of testing, on framing superior-quality test cases and on ranking test cases with new techniques will be advanced.

REFERENCES

1. Ireson, W.G., C.F. Coombs and R.Y. Moss, 1995. Handbook of Reliability Engineering and Management. McGraw-Hill Professional.
2. Beizer, B., 1995. Black-Box Testing. John Wiley & Sons.
3. Beizer, B., 1990. Software Testing Techniques. 2nd Edn., International Thomson Computer Press.
4. Jang, Y., K. Yeo and H. Lee, 2003. A case study of the manual testing and automated testing. KISS.
5. Broekman, B. and E. Notenboom, 2003. Testing Embedded Software. Addison Wesley.
6. Rodd, K., 1998. Practical Guide to Software System Testing. K.J. Ross & Associates Pty. Ltd.
7. OVUM, 1999. Software Testing Tools. OVUM.
8. Sowers, M., 2002. Software Testing Tools Summary. Software Development Technologies Inc. White Paper.
9. Hayes, L., 1995. The Automated Testing Handbook. Software Testing Institute.
10. Fewster, M. and D. Graham, 1999. Software Testing Automation: Effective Use of Test Execution Tools. ACM Press, Addison Wesley.
11. Dustin, E., J. Rashka and J. Paul, 1999. Automated Software Testing. Addison Wesley.
12. Baek, C., S. Park and K. Choi, 2005. TEST: An Effective Automation Tool for Testing Embedded Software. WSEAS Trans. on Information Science & Applications, Vol. 2.
13. Kim, B., C. Baek, J. Jang, G. Jung, K. Choi and S. Park, 2004. A design and implementation of the check module for the test of embedded software. KISS Conference.
14. Kim, B., C. Baek, J. Jang, G. Jung, K. Choi and S. Park, 2005. A design and implementation of virtual environment operator for the embedded software test. KCC 2005.
15. Hoffman, D., 2001. Using test oracles in automation. Pacific Northwest Software Quality Conference.
16. Jeon, H., C. Baek and S. Park, 2005. A test report for the effective defect tracing of the embedded software. Vol. 32, KICS Conference.
17. IEEE-SA, 1998. IEEE 829: IEEE Standard for Software Test Documentation. IEEE-SA Standards Board.
18. Srikanth, H. and L. Williams. Requirements-Based Test Case Prioritization.
19. Elbaum, S., G. Rothermel and S. Kanduri, 2004. Selecting a cost-effective test case prioritization technique. Software Quality J., 12: 185-210.