TABLE OF CONTENTS
Introduction
The Context
Test Framework
Initial Tests
We Are Testing
Observations and Challenges
Login Example Execution
Additional Tests
Additional Observations
Remarks
Appendix: Additional Test Results
INTRODUCTION
The need for high-performance in-memory and distributed data persistence and
interaction is a key architectural concern in this age of digital-everywhere.
The GridGain¹ suite of products addresses this need. The product can be
installed as a free version (Apache Ignite) or as a fully commercial one. This
article uses the commercial version installed as a SaaS solution, GridGain
Nebula.
Once the basic product is installed, the user has access to a Control Centre and
Dashboard to manage the configured nodes in their cluster in the cloud. We will
focus on this Dashboard in this Test Automation exploration. Of necessity, we
must use the public application rather than an on-project product increment
that is heading toward deployment.
So, buckle up and off we go to the Nebula…
Disclaimer:
The author has no commercial connection with GridGain and does not represent
GridGain; all statements made here relate to the product's use in a specific
Test Automation context.
THE CONTEXT
The GridGain documentation describes the Nebula product in the following
terms:
GridGain Nebula is a cloud-native fully-managed service for Apache Ignite
and GridGain Platform. Nebula eliminates the complexity of configuring,
provisioning, managing, optimizing and scaling Apache Ignite in cloud or
hybrid environments. You can spend less time managing the Ignite
instances and more time focused on building high-performance and
scalable applications.
In this article, we are not focused on the underlying data- and performance-
intensive characteristics of GridGain, but on exploring the Test Automation
challenges that arise when we deal with the Nebula Control Centre/Dashboard
that forms part of the product offering. These components are accessed via an
active account login (https://portal.gridgain.com/auth/signin) which, after
clicking "SignIn" with valid credentials, leads to the Dashboard view.
Here the sidebar menu shows the selected item "Dashboard", the associated
data for which is displayed in the main area of the page using a series of
small panels, so-called Gridster panels.
Selecting an individual sidebar menu item gives rise to a new display in the
main area. For example, if we select "SQL", the SQL view is displayed.

¹ GridGain: GridGain — Extreme Speed and Scale for Data-Intensive Apps | GridGain Systems
TEST FRAMEWORK
The framework used in this exploration is one developed by the author that has
been used and adapted over a number of years. It is based upon
Gherkin (BDD)/Java/Selenide/IntelliJ. It caters for i18n as well as small-scale
data needs. It should be noted that, as far as can be ascertained, the current
target application does not have a multi-lingual capability.
In addition, validation logging is built-in and can be turned on or off via a
command line parameter. This type of logging can be helpful when reviewing
the BDD-based tests and understanding what is actually being asserted in any
given test (3 Amigos discussions). Additional parameters specify the application
name, important for multi-application testing projects, and whether the target
browser should be run headless, as would be necessary in a CI/CD pipeline.
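By way of illustration, a test campaign might be launched with a command line of
the following general shape; apart from the standard Cucumber tag filter, the
property names shown here are hypothetical stand-ins for the framework's actual
parameters, not documented switches:

    mvn clean test \
        -Dcucumber.filter.tags="@nebula and @signin" \
        -DapplicationName=Nebula \
        -Dheadless=true \
        -DvalidationLogging=on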
A key architectural aspect of the framework is how it, by design, separates
concerns – BDD expresses user journeys, step definitions provide the concrete
programmatic workflow to enact the BDD, and Page Objects provide the
fundamental services for use by the step definitions. In this structure, only the
Page Object classes contain code involving element locators, Selenide
statements that actually perform the relevant operations and the like. All the
Page Object classes follow the same pattern, thus improving the efficiency of
maintenance. Adding new tests is a breeze, another key positive feature.
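As an indicative sketch of that layering (the class names, step wording and
locator values below are invented for illustration, not taken from the actual
framework), a step definition delegates to a Page Object, and only the Page
Object touches Selenide and locators:

    import io.cucumber.java.en.When;
    import static com.codeborne.selenide.Selenide.$;

    // Step definition layer: enacts the BDD statement by delegating to a Page Object.
    public class Nebula_SignInStepDefinitions {
        private final Nebula_LandingPage landingPage = new Nebula_LandingPage();

        @When("the user signs in with email {string} and password {string}")
        public void theUserSignsIn(String email, String password) {
            landingPage.signIn(email, password);   // no locators or Selenide calls here
        }
    }

    // Page Object layer (its own source file): the only place containing
    // locators and Selenide statements.
    public class Nebula_LandingPage {
        public void signIn(String email, String password) {
            $("[gta='email']").setValue(email);
            $("[gta='password']").setValue(password);
            $("[gta='signin-button']").click();
        }
    }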
The project file is based on the Maven build/provisioning approach.
INITIAL TESTS
In keeping with the test automation framework pattern we are using, and which
has been used in previous articles, our tests start out as a sequence of BDD
statements, reflecting a User Journey in the application.
As a first step, let’s perform both good and bad login operations – the “Hello
World!” of test automation. The BDD for these tests is as shown below:
Credentials obfuscated.
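The feature file itself is not reproduced here; a representative sketch of its
shape (tags, step wording and example values are placeholders) would be:

    @nebula @signin
    Feature: Nebula sign-in

      Background:
        Given the user is on the Nebula sign-in page

      @smoke
      Scenario Outline: NEBULA_MAT_UI_25000.1 - sign in with valid credentials
        When the user signs in with email "<email>" and password "<password>"
        Then the Dashboard page is displayed
        Examples:
          | email              | password     |
          | <valid-obfuscated> | <obfuscated> |

      @negative
      Scenario Outline: NEBULA_MAT_UI_25000.2 - sign in is refused for bad credentials
        When the user signs in with email "<email>" and password "<password>"
        Then a sign-in error is displayed
        Examples:
          | email              | password     |
          | <valid-obfuscated> | badPassword1 |
          | bad.user@test.io   | <obfuscated> |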
As can be seen, the tests are grouped into a Gherkin Feature which contains a
Background section. The individual Scenarios are expressed as outlines since
they have associated example tables. Note too that both the Feature and
Scenario Outline levels have associated attributes which enable test campaigns
to be built at the command line level, accessible by a CI/CD tool such as GitLab
or Jenkins. In NEBULA_MAT_UI_25000.2, we are using a combination of "good"
(i.e. valid and related to the author) and "bad" email addresses and passwords.
In keeping with our test approach, we develop appropriate Page Objects
reflecting the target application's visible “pages”. These, in turn, can “contain”
components. So, for example, in the case of the Dashboard Page, displayed
after we have successfully logged in, we have the following structure:
Page Name: Nebula_DashboardPage

Component Name                         Comments
Nebula_DashboardTitleStripComponent    The title strip at the top of the Dashboard
                                       page. Differs from the SignIn title strip.
Nebula_DashboardSidebarMenuComponent   The vertical sidebar menu at the left side
                                       of the Dashboard page.
Nebula_ClusterGridsterComponent        The main area of the Dashboard when the
                                       user selects the sidebar menu item "Cluster".
Nebula_DashboardGridsterComponent      The main area of the Dashboard when the
                                       user selects the sidebar menu item "Dashboard".
Nebula_AlertingGridsterComponent       The main area of the Dashboard when the
                                       user selects the sidebar menu item "Alerting".
Nebula_TracingGridsterComponent        The main area of the Dashboard when the
                                       user selects the sidebar menu item "Tracing".
Nebula_SQLGridsterComponent            The main area of the Dashboard when the
                                       user selects the sidebar menu item "SQL".
Nebula_ComputeGridsterComponent        The main area of the Dashboard when the
                                       user selects the sidebar menu item "Compute".
Nebula_DeploymentGridsterComponent     The main area of the Dashboard when the
                                       user selects the sidebar menu item "Deployment".
Nebula_CachesGridsterComponent         The main area of the Dashboard when the
                                       user selects the sidebar menu item "Caches".
In this exploration, we are not so interested in asserting the correctness of the
displayed information in the Gridster components (the panels shown in the main
area of the screen for any Sidebar menu item selection), although this would
certainly be of interest when we have an appropriate test application to
populate the nodes in our configured cluster.
WE ARE TESTING
A key aspect of any test automation framework is validation, which is essential
to any testing approach.
The Page Objects design is such that the possible set of items that can be
validated is specified by a set of constants; a fragment of those for the
Nebula_LandingPage Page Object is illustrated below:
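(The fragment below is an indicative sketch only; the constant names and bit
positions are assumptions rather than the actual Nebula_LandingPage source.)

    // Inside the Page Object: each textually-validatable item owns one bit of a BigInteger.
    public static final BigInteger validatePanelHeader   = BigInteger.ONE.shiftLeft(0);
    public static final BigInteger validateEmailLabel    = BigInteger.ONE.shiftLeft(1);
    public static final BigInteger validatePasswordLabel = BigInteger.ONE.shiftLeft(2);
    public static final BigInteger validateSignInButton  = BigInteger.ONE.shiftLeft(3);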
In addition, as required, non-textual validation, e.g. item existence, can be
dealt with by methods having the prefix "do", typified by the following:
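(Again, a sketch only; the method name and locator are assumptions.)

    // Existence-style check, prefixed "do" to distinguish it from textual validations.
    public boolean doSignInButtonExists() {
        return $("[gta='signin-button']").exists();
    }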
As can be seen above, each item on the page which can be textually validated
has an associated (BigInteger) value, assigned to an identifier with the prefix
"validate". These values are, in turn, wrapped in the same-named methods as
shown in the fragment below, for the SignIn Panel:
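(What follows is an indicative sketch rather than the original fragment; the
component and constant names are assumptions.) Each fluent method simply
switches its bit on in the shared bit mask and returns the component so that
calls can be chained:

    public Nebula_SignInPanelComponent validatePanelHeader() {
        _validationSet = _validationSet.or(validatePanelHeader);
        return this;
    }

    public Nebula_SignInPanelComponent validateEmailLabel() {
        _validationSet = _validationSet.or(validateEmailLabel);
        return this;
    }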
Note that “_validationSet” is a static BigInteger variable that is used as a “bit
mask” the value of which reflects the requested validations.
These fluent methods appear in the step definition file as exemplified in the
code associated with validating the SignIn Panel, shown below:
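(The step definition code is not reproduced here; the sketch below simply
illustrates the general shape described above, with assumed names.)

    import java.math.BigInteger;
    import io.cucumber.java.en.Then;
    import org.junit.Assert;

    @Then("the SignIn panel is displayed as expected")
    public void theSignInPanelIsDisplayedAsExpected() {
        BigInteger errorMask = landingPage.getSignInPanelComponent()
                .validatePanelHeader()
                .validateEmailLabel()
                .validatePasswordLabel()
                .validateSignInButton()
                .validatePage();
        Assert.assertEquals("Validation failures detected", BigInteger.ZERO, errorMask);
    }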
As can be seen above, the items to be validated are signalled in sequence,
culminating in a call to “validatePage()”, a service offered by all pages, which
performs the requested validations, assembling any error bit mask for eventual
return (“errorMask”).
The “validatePage()” method in every Page Object follows a consistent pattern,
differing only in its detail. For example, in Nebula_SignInPanelComponent we see
code as exemplified in the following fragment:
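(The original code cannot be shown here; the fragment below is a hypothetical
reconstruction that follows the numbered points beneath it, with assumed field
and helper names.)

    // Point 1: is the validatePanelHeader validation requested in the bit mask?
    if (!_validationSet.and(validatePanelHeader).equals(BigInteger.ZERO)) {
        // Point 2: log what is about to be validated, if the logger is active
        if (_validationLogger != null) {
            _validationLogger.info("Validating the SignIn panel header text");
        }
        // Points 3 and 4: look up the expected text via the translation service
        String translationKey = PANEL_HEADER_TRANSLATION_KEY;
        String expectedText = _translationService.translate(translationKey);
        if (expectedText == null) {
            // Point 5: the translation key is unknown to the translation service
            errorMask = errorMask.or(validatePanelHeader);
            _failedValidationNames.add("validatePanelHeader (unknown translation key)");
        } else if (!panelHeaderElement.getText().equals(expectedText)) {
            // Point 6: direct comparison (equals/startsWith/contains as appropriate)
            // Point 7: on a mismatch, adjust the error bit mask and record the name
            errorMask = errorMask.or(validatePanelHeader);
            _failedValidationNames.add("validatePanelHeader");
        }
    }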
In this block of code, we see the following:
1. At line 180: we check the static bit mask to establish if the request is to
perform the validation signalled by “validatePanelHeader”
2. At line 181: if the validation logger is instantiated, we log a message to
reflect what validation is being performed
3. At line 182: we retrieve a translation key for the specific textual item, set
as a static string within the Page Object
4. At line 183: we retrieve the expected text from the translation service
using the translation key. The translation service is instantiated at the
start of a test using the in-scope spoken language
5. At lines 184-187: we handle the case where the specified translation key is
unknown to the translation service
6. At line 189: we compare the actual textual value with the expected.
Sometimes, because of the way the page is developed or for more subtle
reasons, the direct "equals()" method might be replaced with
startsWith()/endsWith()/contains() as necessary.
7. At lines 190-191: in the case of a non-match, the error bit mask returned
to the step definition caller is adjusted, and the name of the failed
validation is added to an array which is written to the execution console
at the end of a test
OBSERVATIONS AND CHALLENGES
A number of points can be made regarding the challenges encountered, and
overcome, in the development of tests for this (public) application.
1. DOM – the application appears to be using a Material UI (+ Angular?)
component library since many elements have tag names with the prefix
“mat-“.
2. Element location – looking at the DOM, it would appear that there is no
test-specific attribute, e.g. "data-test-id=" as commonly seen in React
applications, in use to help with element location, say, as part of a CSS
selector expression. The use of attributes such as "id=" is not widespread
either. In this exploration, the "gta=" attribute has been used to identify
either actual elements of interest or elements that serve as a base, or
starting point, for CSS expressions.
3. Element state – in the situation where we need to assert the state of an
element as "selected", for example, in the case of Sidebar menu items, it
is not possible to get the state directly from an element attribute. To
establish the state of the menu items it was necessary to use the element
class name, which appears to change in a dependable way as the item is
selected/unselected (see the sketch after this list). In general, the use
of class names to locate or track element states is to be adopted with
caution. Some development tools and frameworks can be configured to
dynamically generate class names, which renders them useless for Test
Automation purposes.
4. Popup Menu state – the User Menu, accessed by clicking the small
circular graphic on the far right of the Dashboard Title Strip, is very easy
to display: simply click the graphic (button). Once on display, it proved
difficult to dismiss. In the event, it was necessary to send the ESC key to
the first clickable menu item in the User Menu to dismiss the overall
popup menu panel (also illustrated in the sketch after this list).
5. Login Site – our login tests initiate the process both from the main panel
that is shown and by clicking the SignIn button on the Title Strip. There
appears to be no technical difference between the two routes of logging
in.
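By way of illustration of points 3 and 4 (the locators, class-name fragment and
method names below are assumptions, not the application's actual values), the
relevant Page Object code might reduce to something like:

    import org.openqa.selenium.Keys;
    import static com.codeborne.selenide.Condition.cssClass;
    import static com.codeborne.selenide.Selenide.$;
    import static com.codeborne.selenide.Selenide.$$;

    // Point 3: infer the "selected" state of a sidebar menu item from its class attribute.
    public boolean isSidebarItemSelected(String gtaValue) {
        return $("[gta='" + gtaValue + "']").has(cssClass("selected"));
    }

    // Point 4: dismiss the User Menu popup by sending ESC to its first clickable item.
    public void dismissUserMenu() {
        $$("[gta='user-menu-item']").first().sendKeys(Keys.ESCAPE);
    }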
LOGIN EXAMPLE EXECUTION
At this exploratory stage we run tests directly from the IntelliJ interface (once
the Run Configuration is set up):
Both our Login tests run to green.
NEBULA_MAT_UI_25000.1
And the associated validation log is as shown below:
ADDITIONAL TESTS
The Login User Journeys threw up a couple of interesting observations, but let’s
go a bit further and see if additional points emerge.
To this end, the additional Scenario Outlines are as shown below:
NEBULA_MAT_UI_25000.3, NEBULA_MAT_UI_25000.4
The first of our additional tests, NEBULA_MAT_UI_25000.3, is only a partial
workflow as it explores the resetting of our password. On-project, if this test
were extended to actually reset the password, we might use an appropriate
trusted API in the "After" method of our test framework to reset the password
back to its original value.
In the second of the additional tests, NEBULA_MAT_UI_25000.4, we check that the
user can indeed log out of the application using the User Menu item available on
the far right of the Dashboard page.
The execution view and validation logs of these tests are provided in the
Appendix.
NEBULA_MAT_UI_25000.5, NEBULA_MAT_UI_25000.6
In the first of these tests, NEBULA_MAT_UI_25000.5, the target assertion is that
the Profile page, accessed via the User Menu on the far right of the Dashboard
title strip, is as expected.
In this test, we pass data to the step (Line 77) where we actually do the
assertions regarding the data on the Profile page. The expected value of the
"phone" element is specified by means of the mnemonic "[blank]". In the step
definition code behind, we need to define a method carrying a special attribute,
recognised at runtime, which specifies that this mnemonic signals an empty
string in Data Tables. This special method is shown below:
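(The actual method is not reproduced here. Assuming the framework relies on
Cucumber's @DataTableType registration, it might take a shape along these lines;
the row type and column handling are illustrative only.)

    import io.cucumber.java.DataTableType;
    import java.util.Map;

    // Registers "[blank]" as the marker for an empty string in Data Table cells and
    // maps a table row onto a simple holder object (ProfileExpectation is hypothetical).
    @DataTableType(replaceWithEmptyString = "[blank]")
    public ProfileExpectation profileExpectation(Map<String, String> row) {
        return new ProfileExpectation(
                row.get("phone"),            // arrives as "" when the table says [blank]
                row.get("country-region"));
    }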
For certain Country/Region selections, GDPR-related input is requested on the
Profile page. The Country/Region for the defined login credentials is specified in
the table by the data item in column “country-region”. The two possible views
of the Profile panel are shown below:
GDPR Consent Required
GDPR Consent Not Required
In the Page Object representing the Profile panel, Nebula_ProfilePage, we need
to examine the “country-region” data item and apply the GDPR-related
validations accordingly.
The key method "findCountryRegionByDisplayedText" is defined in the Profile
page as shown in the fragment below:
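(The original fragment is not reproduced here; the sketch below shows one
plausible shape for it, using a page-specific enumeration. The enum values and
their GDPR flags are assumptions.)

    // Page-specific enumeration mapping displayed text to GDPR applicability.
    public enum CountryRegion {
        UNITED_KINGDOM("United Kingdom", true),
        UNITED_STATES("United States", false),
        UNKNOWN("", false);

        private final String displayedText;
        private final boolean gdprConsentRequired;

        CountryRegion(String displayedText, boolean gdprConsentRequired) {
            this.displayedText = displayedText;
            this.gdprConsentRequired = gdprConsentRequired;
        }

        public boolean isGdprConsentRequired() {
            return gdprConsentRequired;
        }
    }

    public CountryRegion findCountryRegionByDisplayedText(String displayedText) {
        for (CountryRegion candidate : CountryRegion.values()) {
            if (candidate.displayedText.equalsIgnoreCase(displayedText.trim())) {
                return candidate;
            }
        }
        return CountryRegion.UNKNOWN;
    }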
The execution view and validation logs of this test are provided in the Appendix.
In the second of our additional tests, NEBULA_MAT_UI_25000.6, the User Journey
ensures that the Support page, accessed by a link in the Dashboard page Title
strip, is as expected. As in the previous test, we see data being passed to the
step definition file at Line 94.
This test asserts that the data specified in the Data Table appears on the page
and that the Subject and Message fields are empty. We also confirm that, after
successful validation, clicking the browser Back button does indeed take us back
to the Dashboard page.
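(As an indicative sketch with assumed locators, the empty-field checks and the
back-navigation confirmation might reduce to the following.)

    import static com.codeborne.selenide.Condition.empty;
    import static com.codeborne.selenide.Condition.visible;
    import static com.codeborne.selenide.Selenide.$;
    import static com.codeborne.selenide.Selenide.back;

    public void checkSupportPageAndReturn() {
        // Subject and Message must be empty when the Support page is first displayed
        $("[gta='support-subject']").shouldBe(empty);
        $("[gta='support-message']").shouldBe(empty);

        // Navigate back via the browser history and confirm the Dashboard is shown again
        back();
        $("[gta='dashboard-title']").shouldBe(visible);
    }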
ADDITIONAL OBSERVATIONS
The additional tests gave rise to further observations. These can be set out as
below:
1. Element Location - The value of the "id" attribute of DOM elements on
the panels explored often appeared to carry a dynamically generated
postfix. This meant that this attribute wasn't very useful in element
selection, particularly for element collections (see the sketch after this
list, which also illustrates the next point).
2. Textbox Value - Retrieving the content of text entry fields required the
use of the Selenide “getValue()” method rather than “getText()”.
3. GDPR Regulations - The GDPR regulations played a role in what is
shown on the Profile page. This meant that the expected data item
“country-region” needed to be used in conjunction with a page-specific
enumeration to activate additional validations.
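A short sketch illustrating points 1 and 2 (the locator values are assumptions):

    import com.codeborne.selenide.SelenideElement;
    import static com.codeborne.selenide.Selenide.$;

    public String readEmailFieldValue() {
        // Point 1: the id carries a generated postfix (e.g. "email-field-1234"),
        // so an attribute-prefix CSS selector anchors on the stable part only
        SelenideElement emailField = $("input[id^='email-field']");

        // Point 2: getValue(), not getText(), returns the content of a text entry field
        return emailField.getValue();
    }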
REMARKS
The exploration of the GridGain Nebula UI threw up several interesting
observations.
Perhaps the most important ones relate to element location. In some parts of
the DOM there seemed to have been a conscious use of the “gta” attribute as a
reliable element location mechanism, whereas in other parts this attribute is
absent. Elsewhere, the “id” attribute appears but sometimes with a postfix part
which seems to be generated. Adopting a common approach would certainly
help the test automation process.
Whilst the tests shown here have certainly reflected some very relevant User
Journeys, some important product-relevant aspects have been deferred. For
example, the detailed validation of the central area of the Dashboard page for
all the Sidebar menu item selections remains “TBD”. In addition, making
concrete assertions about the target of some of the User Menu item selections
(identified here by their mnemonic), “dark-mode”, “billing”, “teams-mgmnt”,
“whats-new” and “help” also remain outstanding.
Additionally, it should be re-stated that using a public URL for testing does not
reflect the approach that should be taken when testing a web application such as
Nebula.