2. Tempest Design Principles that we strive to live by.
● Tempest should be able to run against any OpenStack
cloud, be it a one-node DevStack install, a 20-node LXC
cloud, or a 1000-node KVM cloud.
● Tempest should be explicit in testing features. It is easy to
auto-discover features of a cloud incorrectly and give
people an incorrect assessment of their cloud. Explicit is
always better.
● Tempest uses OpenStack public interfaces. Tests in
Tempest should only touch public OpenStack APIs.
● Tempest should not touch private or implementation-specific
interfaces. This means not going directly to the database,
not directly hitting the hypervisors, and not testing extensions
not included in the OpenStack base. If there are
features of OpenStack that are not verifiable through the
standard interfaces, this should be considered a possible
enhancement of those interfaces.
● Tempest strives for complete coverage of the OpenStack
API and common scenarios that demonstrate a working
cloud.
● Tempest drives load in an OpenStack cloud. By including a
broad array of API and scenario tests, Tempest can be
reused in whole or in part as load generation for an
OpenStack cloud.
● Tempest should attempt to clean up after itself; whenever
possible it should tear down the resources it creates when done.
● Tempest should be self-testing.
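The clean-up principle above can be sketched with plain `unittest`: register a teardown for every resource immediately after creating it, so the cloud is left clean even if a later assertion fails. This is only an illustration of the pattern; the client class below is a hypothetical stand-in, not Tempest's real base classes or service clients.

```python
import unittest


class FakeServerClient(object):
    """Hypothetical stand-in for an OpenStack compute client."""

    def __init__(self):
        self.servers = {}
        self._next_id = 0

    def create_server(self, name):
        self._next_id += 1
        self.servers[self._next_id] = name
        return self._next_id

    def delete_server(self, server_id):
        self.servers.pop(server_id, None)


class ServerTest(unittest.TestCase):
    client = FakeServerClient()

    def test_create_server(self):
        server_id = self.client.create_server('demo')
        # Register teardown right away: it runs even if the
        # assertions below fail, so no resource is leaked.
        self.addCleanup(self.client.delete_server, server_id)
        self.assertIn(server_id, self.client.servers)
```

After the test finishes (pass or fail), the registered cleanups run in reverse order and the fake cloud holds no leftover servers.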
3. In terms of software architecture, Rally consists of 4 main components:
● Server Providers - provide servers (virtual servers), with SSH access, in one L3 network.
● Deploy Engines - deploy an OpenStack cloud on the servers supplied by the Server Providers.
● Verification - runs Tempest (or another specific set of tests) against a deployed cloud, collects the results & presents
them in human-readable form.
● Benchmark Engine - allows writing parameterized benchmark scenarios & running them against the cloud.
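A parameterized benchmark scenario is described in a Rally task file. The sketch below shows the general shape of such a file; the scenario name, flavor, and image are illustrative values, not a configuration taken from this deck.

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros"}
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {"tenants": 2, "users_per_tenant": 3}
            }
        }
    ]
}
```

The runner section is what makes the scenario a load generator: here it would boot and delete a server 10 times, 2 in parallel, on behalf of auto-created tenants and users.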
4. Typical cases where Rally aims to help are:
● Automate measuring & profiling focused on how new code
changes affect OpenStack performance;
● Use the Rally profiler to detect scaling & performance issues;
● Investigate how different deployments affect OpenStack performance:
● Find the set of suitable OpenStack deployment architectures;
● Create deployment specifications for different loads (number of
controllers, Swift nodes, etc.);
● Automate the search for the hardware best suited to a particular
OpenStack cloud;
● Automate production cloud specification generation:
● Determine terminal loads for basic cloud operations: VM start &
stop, block device create/destroy & various OpenStack API
methods;
● Check the performance of basic cloud operations under different
loads.
6. Proboscis
● Uses decorators instead of naming conventions.
● Allows for TestNG-style test methods.
(@BeforeSuite, @AfterSuite, @BeforeTest, @AfterTest, @BeforeGroups,
@AfterGroups...)
● Allows for explicit test dependencies and skipping of dependent tests on failures.
(@test(groups=["user", "user.initialization"],
depends_on_groups=["service.initialization"]))
● Runs xUnit-style classes if desired or needed for backwards compatibility.
(setup_module, teardown_module... setup_method, teardown_method)
● Uses Nose if available (but doesn't require it), and works with many of its plugins.
● Runs in IronPython and Jython.
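The group/dependency idea behind Proboscis's `@test` decorator can be sketched in a few dozen lines. This toy scheduler is NOT Proboscis's real implementation: it registers each function with its groups, runs tests whose dependency groups are satisfied first, and marks dependents as skipped when a dependency group fails.

```python
# Toy sketch of decorator-based test registration with group
# dependencies; illustrative only, not Proboscis internals.
REGISTRY = []


def test(groups=(), depends_on_groups=()):
    """Register a test function with its groups and dependencies."""
    def decorator(func):
        REGISTRY.append({
            'func': func,
            'groups': set(groups),
            'depends_on_groups': set(depends_on_groups),
        })
        return func
    return decorator


def run_all():
    """Run registered tests dependency-first; skip tests whose
    dependency groups contain a failure."""
    done, failed, results = set(), set(), {}
    pending = list(REGISTRY)
    while pending:
        ready = [e for e in pending
                 if e['depends_on_groups'] <= (done | failed)]
        if not ready:
            break  # circular or unsatisfiable dependencies
        for entry in ready:
            pending.remove(entry)
            name = entry['func'].__name__
            if entry['depends_on_groups'] & failed:
                results[name] = 'SKIPPED'
                failed |= entry['groups']
                continue
            try:
                entry['func']()
                results[name] = 'PASSED'
                done |= entry['groups']
            except AssertionError:
                results[name] = 'FAILED'
                failed |= entry['groups']
    return results


@test(groups=["service.initialization"])
def start_service():
    assert True


@test(groups=["user.initialization"],
      depends_on_groups=["service.initialization"])
def create_user():
    assert True
```

With these two registrations, `run_all()` executes `start_service` before `create_user`; if `start_service` raised an `AssertionError`, `create_user` would be reported as skipped rather than run against an uninitialized service.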
10. pytest features
● Detailed info on failing assert statements
(no need to remember self.assert* names);
● Auto-discovery of test modules and functions;
● Modular fixtures for managing small or
parametrized long-lived test resources;
● Can run unittest (or trial) and nose test suites
out of the box;
● Python 2.6+, Python 3.2+, PyPy-2.3, Jython-2.5 (untested);
● Rich plugin architecture, with over 150
external plugins and a thriving community.
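The first two bullets combine nicely: pytest discovers files named `test_*.py` and functions named `test_*` automatically, and plain `assert` statements are rewritten so failures show the values involved. A minimal illustrative test module (file and function names are the author's example, not from this deck):

```python
# Contents of test_sample.py -- pytest picks this file up by name,
# no registration or base class needed.

def inc(x):
    return x + 1


def test_inc():
    # On failure pytest reports the expanded expression,
    # e.g. "assert 4 == 5" with the result of inc(3) shown.
    assert inc(3) == 4
```

Running `pytest test_sample.py` collects and executes `test_inc` with no further configuration.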
14. Get job info in JSON format:
http://localhost/jenkins/job/job_name/build_number/api/json
http://localhost/jenkins/job/job_name/api/json
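These raw JSON endpoints can also be read without `jenkinsapi`, using only the standard library. The sketch below builds the endpoint URL and decodes the response; `BASE` and the helper names are assumptions for illustration, and the exact fields in the returned JSON depend on your Jenkins version and plugins.

```python
import json

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

BASE = 'http://localhost/jenkins'


def job_api_url(job_name, build_number=None):
    """Build the JSON API URL for a job, or for one of its builds."""
    url = '%s/job/%s' % (BASE, job_name)
    if build_number is not None:
        url += '/%d' % build_number
    return url + '/api/json'


def fetch_job_info(job_name, build_number=None):
    """Fetch and decode job/build info (needs a live Jenkins at BASE)."""
    return json.loads(urlopen(job_api_url(job_name, build_number)).read())
```

For example, `job_api_url('deploy', 42)` yields `http://localhost/jenkins/job/deploy/42/api/json`, matching the URL shapes above.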
from jenkinsapi.jenkins import Jenkins

def get_server_instance():
    jenkins_url = 'http://jenkins_host:8080'
    server = Jenkins(jenkins_url, username='foouser',
                     password='foopassword')
    return server

def get_job_details():
    """Get details of each job running on the Jenkins instance."""
    # See get_server_instance() above
    server = get_server_instance()
    for job_name, job_instance in server.get_jobs():
        print('Job Name: %s' % job_instance.name)
        print('Job Description: %s' % job_instance.get_description())
        print('Is Job running: %s' % job_instance.is_running())
        print('Is Job enabled: %s' % job_instance.is_enabled())
https://github.com/Betrezen/unified_test_reporter/blob/review/unified_test_reporter/unified_test_reporter/providers/jenkins_client.py
15. 1. Test Case. A formal description of a test: title, description,
steps to reproduce, expected result, milestone, test group,
priority, estimate. Example:
https://xyz.testrail.com/index.php?/cases/view/6633
2. Test Suite. A group of test cases combined
together logically. Example:
https://xyz.testrail.com/index.php?/suites/view/12
3. Test (or Test Result). A test and its results combined.
A test is related to its corresponding test case,
and the main idea is to keep the execution result under this
entity. Example:
https://x.testrail.com/index.php?/tests/view/2741867
4. Test Run. A combination of tests to be
executed, where all the corresponding test cases (recall the
relationship between test case and test result) belong to one
test suite. In effect, a test run is the result of executing a
test suite for a requested milestone, ISO build and
environment. Example:
https://x.testrail.com/index.php?/runs/view/6490
5. Test Plan. A combination of test runs; typically one new
plan is created per ISO build. Example:
https://x.testrail.com/index.php?/plans/view/6484
6. Test Report. A report over a combination of test runs. Example:
https://x.testrail.com/index.php?/reports/view/8319
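The containment relationships between these entities can be made concrete with a small in-memory model. Everything below is a hypothetical sketch for illustration; the IDs reuse the example URLs above, but the field names are the author's, not TestRail's schema.

```python
from collections import namedtuple

# Hypothetical model: a plan groups runs, a run executes the cases
# of one suite, and each test ties a case to its execution results.
Case = namedtuple('Case', 'id title suite_id')
Suite = namedtuple('Suite', 'id name case_ids')
Test = namedtuple('Test', 'id case_id results')   # results: list of statuses
Run = namedtuple('Run', 'id suite_id test_ids milestone build')
Plan = namedtuple('Plan', 'id build run_ids')     # one plan per ISO build

case = Case(6633, 'Create environment and set up master node', 12)
suite = Suite(12, 'Smoke', [case.id])
test = Test(2741867, case.id, ['passed'])
run = Run(6490, suite.id, [test.id], milestone='9.0', build='iso-1234')
plan = Plan(6484, 'iso-1234', [run.id])
```

Walking the references top-down (plan → run → test → case → suite) mirrors how TestRail links a result back to its formal test case.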
16. http://docs.gurock.com/testrail-api2/start
get_projects: GET index.php?/api/v2/get_projects
get_plans: GET index.php?/api/v2/get_plans/:project_id
get_runs: GET index.php?/api/v2/get_runs/:project_id
get_suites: GET index.php?/api/v2/get_suites/:project_id
get_tests: GET index.php?/api/v2/get_tests/:run_id
get_results: GET index.php?/api/v2/get_results/:test_id
get_results_for_run: GET index.php?/api/v2/get_results_for_run/:run_id
https://x.testrail.com/index.php?/api/v2/get_cases/3&suite_id=12
{"id": 10602, "title": "Create environment and set up master node", "section_id": 94, "template_id": 1, "type_id": 1,
"priority_id": 4, "milestone_id": 10, "refs": null, "created_by": 4, "created_on": 1424360830, "updated_by": 4,
"updated_on": 1453783245, "estimate": "3m", "estimate_forecast": "21m18s", "suite_id": 12, ….}
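Calling these endpoints needs basic authentication and a JSON content type. The minimal GET client below is a sketch under those assumptions; the class and method names are illustrative, and a real client should also handle HTTP errors and rate limits.

```python
import base64
import json

try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2


class TestRailClient(object):
    """Minimal illustrative TestRail API v2 GET client."""

    def __init__(self, base_url, user, password):
        self.base_url = base_url.rstrip('/')
        creds = ('%s:%s' % (user, password)).encode('utf-8')
        self.auth = 'Basic ' + base64.b64encode(creds).decode('ascii')

    def url_for(self, method_and_args):
        # e.g. 'get_cases/3&suite_id=12' -> full endpoint URL
        return '%s/index.php?/api/v2/%s' % (self.base_url, method_and_args)

    def get(self, method_and_args):
        """Issue an authenticated GET and decode the JSON body."""
        request = Request(self.url_for(method_and_args))
        request.add_header('Authorization', self.auth)
        request.add_header('Content-Type', 'application/json')
        return json.loads(urlopen(request).read())
```

For instance, `client.get('get_cases/3&suite_id=12')` targets exactly the example URL shown above.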
17. from launchpadlib.launchpad import Launchpad
launchpad = Launchpad.login_anonymously('just testing', 'production')
https://github.com/Betrezen/unified_test_reporter/blob/review/unified_test_reporter/unified_test_reporter/providers/launchpad_client.py
class LaunchpadBug(object):
    """LaunchpadBug."""  # TODO documentation

    def __init__(self, bug_id):
        self.launchpad = Launchpad.login_anonymously('just testing',
                                                     'production',
                                                     '.cache')
        self.bug = self.launchpad.bugs[int(bug_id)]

    @property
    def targets(self):
        return [
            {
                'project': task.bug_target_name.split('/')[0],
                'milestone': str(task.milestone).split('/')[-1],
                'status': task.status,
                'importance': task.importance,
                'title': task.title,
            } for task in self.bug.bug_tasks]

    @property
    def title(self):
        """Get the bug title.

        :return: bug title - str
        """
        return self.targets[0].get('title', '')
The top-level objects
The Launchpad object has attributes corresponding to the
major parts of Launchpad. These are:
1) .bugs: All the bugs in Launchpad
2) .people: All the people in Launchpad
3) .me: You
4) .distributions: All the distributions in Launchpad
5) .projects: All the projects in Launchpad
6) .project_groups: All the project groups in Launchpad
Example:
me = launchpad.me
print(me.name)
people = launchpad.people
salgado = people['salgado']
print(salgado.display_name)
salgado = people.getByEmail(email="guilherme.salgado@canonical.com")
print(salgado.display_name)
for person in people.find(text="salgado"):
    print(person.name)