NASA NIFS – Intern Final Report – Nicole Maguire
Spaceport Command and Control System Software Development
Nicole Maguire
NASA Kennedy Space Center, FL 32899
Mentor: Caylyne Shelton
NE-ES
ABSTRACT
NASA’s Spaceport Command and Control System (SCCS) at Kennedy Space Center is a software
system designed to control and monitor the new Space Launch System, including Orion, and its launch through
liftoff. Under SCCS, the Launch Control Subsystem (LCS) operates in real time to allow interaction
between users in the firing room and rockets on the launch pad. Because many of these applications are
built for this user interaction, they contain many Graphical User Interface (GUI) elements that
abstract the underlying code away from the user. To fully verify and validate the software, all testing
of the code must also mimic user behavior with GUI elements. Currently, testing is manual: a tester
must sit at the computer that contains the dashboard and follow a spreadsheet of
instructions listing each input and its intended outcome. This is a redundant, time-intensive process that
also leaves room for human error. The solution is automated testing software. The
two LCS subsystems whose test automation is discussed in this paper are the Display Services
and Framework (DSF) Computer Software Configuration Item (CSCI) and the System Applications (SAPP)
CSCI dashboard.
Automation of JUnit Tests for DSF
Nicole Maguire
James Madison University — 800 South Main St., Harrisonburg, VA, 22807
Nomenclature
AccuRev – centralized version control system
ClearQuest – workflow automation tool
CSCI – Computer Software Configuration Item
CSE – Console System Engineer, who operates the sets in the firing room
DSF – Display Services & Framework
FEST – an open-source, context-sensitive testing library for testing Swing applications
GUI – Graphical User Interface
Java – a strongly typed programming language with many built-in libraries
JUnit – a test package, specific to the Java language, that verifies the behavior of written code
NASA – National Aeronautics and Space Administration
Terminal – a character-based, non-graphical user interface
UISpec4J – an open-source testing library for testing Swing applications
I. Introduction
The Display Services and Framework (DSF) CSCI allows the Console System Engineer (CSE) to create
displays for the LCS using the Display Editor (DE). The program is written in Java and tested using JUnit test suites.
Many of the tests require action from the user to proceed; these actions range from clicking a button on a
pop-up, to inputting text, to selecting a specific file. Before I began the automation process, the tests could
be executed either manually or automatically. The automatic option did not actually automate anything; it simply
skipped over any tests that required user action, which meant that not all of the code was being run or tested each time.
II. Training/Familiarizations
Before beginning this project, I had to complete multiple training and familiarization programs. The first
was a general intern training by the education office, which introduced us to the rules and protocols of NASA and the
Kennedy Space Center. The education office continued training throughout the project on topics such as
innovation, intruder safety, and information export. To gain access to AccuRev and ClearQuest, the
source code configuration management and work tracking software tools, I attended in-person training that taught
us how to create a work order and promote code through AccuRev. I already knew about JUnit testing, but I had to
teach myself NetBeans, the Java Integrated Development Environment (IDE) that is standard for the
project. The most difficult training was for UISpec4J and FEST, the two pieces of automation software
used. There was little documentation and few online tutorials, so to learn these I had to read through their original
source code.
III. Approach
Once all of my training and preparation was complete, I was put on the Display Services and Framework
project. I first had to make sure that all of the existing unit tests were correct and working. A common
problem was that the code had been updated without the corresponding updates to the test cases. The more difficult
problem I encountered was that tests failed reporting either a locked SQLite database or a disabled button, and
these two errors appeared interchangeably.
Errors displayed during malfunctioning JUnit tests
When run independently, the tests showed no issues; the errors only appeared when the tests were run
consecutively. Because of this behavior, it was discovered that there was a timing error in the actual
code. To work around it in the tests, I added a loop that checks whether accessing the database returns an error; if it
does, the program sleeps for a second and then checks again. Originally, there were simply eight- or nine-second sleeps
that would be lengthened whenever the program continued to lock up at one spot in the code. The loop allows for a
far more efficient and flexible sleep. Stripped of everything apart from its infrastructure, the
code looks like the following:
Example loop for accessing the database
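The loop itself survives only as a screenshot in the original report. A minimal sketch of the pattern described above might look like this; the helper name, the attempt limit, and the generic `Callable` wrapper are assumptions, not the report's actual code:

```java
import java.util.concurrent.Callable;

public class DatabaseRetry {
    // Keep retrying the action (e.g. a database access) until it stops
    // throwing, sleeping one second between attempts, up to a limit.
    public static <T> T withRetry(Callable<T> action, int maxAttempts)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;            // e.g. the database is still locked
                Thread.sleep(1000);  // sleep for a second, then check again
            }
        }
        throw last; // give up after the final attempt
    }
}
```

Compared with fixed eight- or nine-second sleeps, this loop waits only as long as the database is actually locked, which is what made it both efficient and flexible.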
Once all of the tests were fixed, I began to automate them using UISpec4J, an open-source program that
allows for testing of GUI elements. I chose this program because my technical point of contact had some experience
with it on a few tests before I began. The biggest problem with this software is that no documentation or detailed
examples exist online; I had to read through the source code before I understood enough about the program to start
using it on pop-up windows.
For reference, the following is what a very simple pop-up window created in Java looks like. It has a title, a
message to be displayed, and a button, all three of which can be custom strings supplied by the user. If this
pop-up were automated, UISpec4J would grab the window titled “The Title of the PopUp” and
click the “Button” button.
Example pop-up
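The pop-up screenshot was lost in this copy of the report. A plain-Java sketch of such a dialog, using `JOptionPane` as one plausible way to build it (the report does not say which Swing class was actually used), might look like:

```java
import javax.swing.JDialog;
import javax.swing.JOptionPane;

public class PopUpExample {
    // Build the pane: a message and one custom "Button" option.
    public static JOptionPane createPane() {
        return new JOptionPane(
                "The message to be displayed",
                JOptionPane.INFORMATION_MESSAGE,
                JOptionPane.DEFAULT_OPTION,
                null,                        // no icon
                new Object[]{"Button"});     // the single "Button" button
    }

    public static void main(String[] args) {
        // Wrap the pane in a dialog titled "The Title of the PopUp" and show it.
        JDialog dialog = createPane().createDialog("The Title of the PopUp");
        dialog.setVisible(true);
    }
}
```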
To set up the program, UISpec4J was linked in the class path using a snapshot of the jar file, and then
initialized in the actual JUnit test suite. UISpec4J first takes in a trigger, the line of code that causes a pop-up
window to appear. The window interceptor then catches the triggered window and executes any specified code on
the pop-up. To find the trigger code, I stepped through the code line by line in manual mode to see which pop-up was
occurring and which action needed to be performed; that trigger is encased in the block of code that runs procedurally.
Inside the window handler's process method, I coded the required action on the button or text field located in the
pop-up. The following is an example of what UISpec4J looks for when running:
Framework for UISpec4J
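The framework screenshot is missing from this copy. UISpec4J's trigger/interceptor idiom, as described above, looks roughly like the following fragment; the component names ("Open", "name", "OK") are hypothetical, and the uispec4j jar must be on the class path, so this is an illustrative sketch rather than runnable code from the project:

```
import org.uispec4j.Trigger;
import org.uispec4j.Window;
import org.uispec4j.interception.WindowHandler;
import org.uispec4j.interception.WindowInterceptor;

// The trigger is the line of code that opens the pop-up; the handler
// runs once the window has been intercepted.
WindowInterceptor
    .init(mainPanel.getButton("Open").triggerClick())     // trigger: opens the pop-up
    .process(new WindowHandler() {
        public Trigger process(Window popup) throws Exception {
            popup.getInputTextBox("name").setText("example"); // act on the pop-up
            return popup.getButton("OK").triggerClick();      // close it
        }
    })
    .run();
```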
Sometimes this proved very difficult because a button was unnamed or could not be clicked.
One workaround was to wait for the button to appear, or, where applicable, simply to close the window when it
popped up. A lot of troubleshooting was required to finish the tests.
IV. Conclusion
UISpec4J worked for all but three of the test suites that needed to be automated. The problem is that certain
toolkits cannot be cast to UISpec4J's types; although this casting issue was reported to the
developers back in 2011, the software has not been updated since. Overall, around forty tests were automated,
which makes the project a success. These code changes will be promoted into AccuRev and, if they pass code
review, will become part of the DSF system. The next step will be to look into FEST,
another automation tool, to try to fill in where UISpec4J falls short. The remaining time in my internship will
be spent trying to automate the remaining tests.
V. Acknowledgements
I would like to thank Jason Kapusta for introducing me to the project. I thank Thong Tran, my DSF technical
point of contact, for taking the time to walk me through DSF and helping me research problems that came up.
Lastly, I would like to thank Caylyne Shelton and Jamie Szafran, my mentors, for their guidance and support
through my first internship.
Software Automation using SikuliX
Nicole Maguire
James Madison University — 800 South Main St., Harrisonburg, VA, 22807
Nomenclature
AccuRev – centralized version control system
ClearQuest – workflow automation tool
DA – Development Activity
GUI – Graphical User Interface
IMPERIO – Image Png Emulation/Replication for Intended Output
Java – a strongly typed programming language with many built-in libraries
Jython – a programming language that combines Java and Python
NASA – National Aeronautics and Space Administration
NE-ES – NASA Engineering and Excellence in Software
PNG – Portable Network Graphics
PWS – Portal Workstation
Robot Framework – a generic framework for automating testing
SAPP – System Applications
SikuliX – uses image recognition to automate GUI elements
I. Introduction
The objective of this project was to automate the testing of software in the LCC. This was done by creating
a program that interacts with GUI elements as if a user were sitting at the computer. This software reduces
the redundancy and wasted time for the engineers operating the tests. Once testing is complete, the software displays
any errors found in a sorted manner. Another benefit of this kind of testing is that it can be done constantly:
instead of waiting three to six months to test all the software together, automated testing can run
every night and alert the programmer to bugs the next morning. The long-term goal of this system is to integrate it
with the Jenkins server.
II. Responsibilities
As a group, our responsibility was to prove that this kind of automation system can be used
by getting it to work for just one test. We were tasked with getting the software to run correctly and then deploying it
for further testing in the sets in the firing room. Because my main project was automating JUnit tests for DSF, I was
called into this project to help as needed. This mainly meant writing programs where no
preexisting code was available, especially when the code needed to be written in Java or one of its dependencies.
III. Training
The training included all of the aforementioned training needed to start the internship. Specific to this project, we
used online classes to support our background knowledge of Bash and Python scripting. I already had access to
AccuRev and ClearQuest. Learning SikuliX and Robot Framework was much less formal and involved using
the programs and following online tutorials to familiarize ourselves with the software.
IV. Approach
Since my involvement with the team was as needed, I was mainly called in to help when something was
wrong. Early in the development of the SikuliX environment, the Robot Integrated Development Environment
(RIDE) was failing: a dependency was needed to build the program, but our machines were running an older
version of the Red Hat operating system (OS) that did not have it. The failing component was supposed to retrieve
the process identifier (PID) from the operating system. To fix this, I wrote two programs. The first was a Java program
that obtained the PID through a different method compatible with our OS and parsed it into the String
needed by RIDE. The other was a script in Jython, a Java-based scripting language, that
integrated the new program into the actual IDE.
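The report does not say which alternative method the Java program used. One common approach on pre-Java-9 JVMs, shown here purely as a plausible sketch, is to parse the runtime MXBean name, which conventionally has the form "pid@hostname":

```java
import java.lang.management.ManagementFactory;

public class PidLookup {
    // On older JVMs (before Java 9's ProcessHandle API), a common trick is
    // to parse the runtime MXBean name, whose form is "pid@hostname".
    public static String currentPid() {
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        return jvmName.split("@")[0]; // the part before '@' is the PID
    }

    public static void main(String[] args) {
        System.out.println(currentPid());
    }
}
```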
My most significant contribution to the project was a program I wrote called IMPERIO (Image Png
Emulation/Replication for Intended Output). One of the biggest problems encountered while trying to automate the
system was text recognition: for SikuliX to click on something, it must first be shown a picture of
exactly what it must click on. This was a problem for text because the font, size, or actual text could differ from
what we were able to capture in a picture. The solution was IMPERIO, which takes in the desired
text and creates a PNG file that mimics what the SAPP dashboard would produce.
For example, if the desired text was “Sample_Text”, the program would output the following three images:
Sample Text Displayed by IMPERIO
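The generated images are not preserved in this copy of the report. The core text-to-PNG rendering that IMPERIO performs can be sketched in plain Java 2D as follows; the colors, font, padding, and file names here are placeholder assumptions, not the dashboard's real values:

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TextToPng {
    // Render the given text onto a solid background color.
    public static BufferedImage render(String text, Font font,
                                       Color foreground, Color background) {
        // First pass: measure the text with a throwaway 1x1 image.
        BufferedImage probe = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = probe.createGraphics();
        g.setFont(font);
        FontMetrics fm = g.getFontMetrics();
        int width = fm.stringWidth(text) + 10;
        int height = fm.getHeight() + 10;
        g.dispose();

        // Second pass: draw the text for real at the measured size.
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        g = image.createGraphics();
        g.setColor(background);
        g.fillRect(0, 0, width, height);
        g.setColor(foreground);
        g.setFont(font);
        g.drawString(text, 5, 5 + fm.getAscent());
        g.dispose();
        return image;
    }

    public static void main(String[] args) throws Exception {
        Font font = new Font("Monospaced", Font.PLAIN, 20);
        // One image per candidate background color, as IMPERIO does.
        Color[] backgrounds = {Color.BLACK, Color.DARK_GRAY, Color.GRAY};
        for (int i = 0; i < backgrounds.length; i++) {
            BufferedImage img = render("Sample_Text", font, Color.GREEN, backgrounds[i]);
            ImageIO.write(img, "png", new File("sample_" + i + ".png"));
        }
    }
}
```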
The team was able to give me a picture of what the dashboard might look like, and from this I
recreated the exact colors of the dashboard. The program creates three almost identical images that differ only in
background color, because the way the different lines are tinted on the dashboard makes it infeasible to
know the background color prior to testing. Once I had the program producing the three images as I wanted,
I needed to make IMPERIO more versatile. Because it was invoked from a script, it had to be easy to run from the
command line, so I packaged the program as a jar that could be run with the following command.
Bash command to run IMPERIO with default font and size
The first argument, "sample", is what the output file will be named. The second argument, "Sample_Text", is
the intended text of the program, and the final argument, "~/Desktop", is where the output will be saved
relative to the current directory.
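The command itself appears only as an image in the report, and the real jar name was not preserved. With a hypothetical jar name, the invocation described above would look something like:

```
# Hypothetical invocation -- the jar name is an assumption.
# args: output file name, text to render, output location
java -jar imperio.jar sample "Sample_Text" ~/Desktop
```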
The next step was for the program to be able to output different fonts and font sizes. To find the
correct font used on the dashboard, I had to search through the source code. I set this font as the
default, but still allow it to be changed if needed, and I did the same for the font size. Running the same
example as above, but with a specified font and font size, would look like the following, where "Courier" is the font
argument and "50" is the font size argument:
Bash command to run IMPERIO with a specified font and size
The output from this command is the text below. It has the same message, background, and formatting, but
in a noticeably different font and size.
Output from IMPERIO with specified font and size
The major problem for this project was that the machines the team and I were programming on did not
come with the font being used on the actual sets in the firing rooms. The dashboard was telling our
computers to display a font we did not have, so our computers defaulted to a different one. This was a major
problem for the rest of the team, who had previously been capturing images of the text as it appeared on their
machines to tell SikuliX what to look for; those images could not be moved to the firing rooms as hoped. The
solution to this problem is what made IMPERIO so helpful for the entire project: the text font is not hardcoded,
so it can be changed at runtime. If my machine fell back to a default font, the program mimicking the text would
fall back to the same one, so the text matching worked in either situation.
V. Conclusion
The RIDE fix was helpful at the time, but the team drifted away from using a full IDE to
develop their automation. IMPERIO was chosen over other recognition tools as the preferred way of dealing with
text because it performed better than any other software the team could find. In testing with SikuliX and
Robot Framework, the programs have matched the text with no difficulty detected so far. Once IMPERIO passes code
review, it will be uploaded to AccuRev.
VI. Acknowledgements
I thank Jason Kapusta and Paul Kuracz, my technical points of contact, for always being there when we needed
help or something was broken. I would like to acknowledge the rest of the interns on my team: Jacob Huesman,
Beth Dube, Kyle Besser, and Josh Connolly. Again, I thank Caylyne Shelton and Jamie Szafran for their
mentorship.