The document describes the structure and workflow of the Bareon functional testing project. It covers the Python APIs used to control services on the controller and slave nodes, and to run functional tests in parallel across multiple slave nodes. Tests can validate ramdisk functionality alone or involve rebooting into a tenant image. Logs from the agent and from the tenant image are sent back to the controller. Tests are grouped into classes that share setup resources and target specific images, firmware, or nodes. Parallelism is achieved by running tests simultaneously across virtual or physical slave nodes managed by the controller.
2. Bareon-func-test project structure (formerly fpa-func-framework)
- Python API to start services on controller
- Python API to upload custom stub images, firmware, etc.
- Python API to control slave nodes
- /etc/bareon-func-test.conf (see the loading sketch after this list)
- virsh creds
- IPMI creds
- degree of parallelism (number of slaves)
- slaves are pooled
- optional DHCP, TFTP, PXE, HTTP params
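
A minimal sketch of how the framework could load /etc/bareon-func-test.conf, assuming an ini-style layout; the section and option names (virsh, ipmi, lab, net) are illustrative, not the real schema:

    # Hypothetical loader for /etc/bareon-func-test.conf; option names are
    # assumptions, not the actual bareon-func-test schema.
    import configparser

    CONF_PATH = "/etc/bareon-func-test.conf"

    def load_lab_config(path=CONF_PATH):
        cfg = configparser.ConfigParser()
        if not cfg.read(path):
            return None  # no config file -> functional tests will be skipped
        return {
            "virsh_uri": cfg.get("virsh", "uri", fallback="qemu:///system"),
            "ipmi_user": cfg.get("ipmi", "user", fallback=None),
            "ipmi_password": cfg.get("ipmi", "password", fallback=None),
            "parallelism": cfg.getint("lab", "parallelism", fallback=1),
            "dhcp_range": cfg.get("net", "dhcp_range", fallback=None),
        }
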
3. Bareon project structure (formerly fuel-agent)
- code
- unit tests
- run by tox in unit-tests env
- functional tests
- run by tox in func-tests env
- import bareon-func-framework
- setUp, tearDown are written using API provided by bareon-func-test
- default slave lab configuration (disk space, CPUs, RAM) defined in base setUp.
- can be overridden for particular test, using bareon-func-test API.
- functional tests run only if /etc/bareon-func-test.conf is present, otherwise they are skipped (see the base test case sketch below).
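
A sketch of what the base functional test case could look like, assuming unittest-style tests; the bareon-func-test calls are left as comments because the real API names are not defined here:

    import os
    import unittest

    CONF_PATH = "/etc/bareon-func-test.conf"

    class BaseBareonFuncTest(unittest.TestCase):
        # Default slave lab configuration; a particular test class can
        # override these attributes (via the bareon-func-test API).
        slave_cpus = 2
        slave_ram_mb = 2048
        slave_disk_gb = 20

        def setUp(self):
            super(BaseBareonFuncTest, self).setUp()
            if not os.path.exists(CONF_PATH):
                self.skipTest("%s not present, functional tests are skipped"
                              % CONF_PATH)
            # self.lab = bareon_func_test.Lab(CONF_PATH)       # assumed API
            # self.slave = self.lab.acquire_slave(
            #     cpus=self.slave_cpus, ram_mb=self.slave_ram_mb,
            #     disk_gb=self.slave_disk_gb)

        def tearDown(self):
            # self.lab.release_slaves()                        # assumed API
            super(BaseBareonFuncTest, self).tearDown()

    class BigDiskTest(BaseBareonFuncTest):
        # Overrides the default lab configuration for this class only.
        slave_disk_gb = 200
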
4. bareon-func-test lab
Controller (one, VM or BM):
- initial node where bareon is fetched to run tests.
- hosts DHCP, TFTP, PXE and fake image service.
- all workers share a single set of services
- spawns slaves (using python virsh bindings, or preconfigured BM nodes)
- manages pool of slaves
- executes tests using the available number of slaves
- drives FPA in every test (by SSHing to the slave; see the sketch after this section)
- if too much code overlaps with Ironic itself, may be based on Ironic
Slave node (many, VM or BM or both):
- depending on /etc/bareon-func-test.conf:
- can live inside controller (nested virt)
- can live at the same level as the controller (separate networks when not nested?)
- can be a BM server
- booted via PXE
- runs ramdisk with agent
- runs tests (one by one, driven by controller)
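
A sketch of the controller side, assuming libvirt as the "python virsh bindings" and paramiko for SSHing into the slave; the domain XML and credentials are illustrative only:

    import libvirt
    import paramiko

    # Minimal illustrative domain definition for one virtual slave.
    SLAVE_XML = """<domain type='kvm'>
      <name>bareon-slave-0</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type><boot dev='network'/></os>
    </domain>"""

    def spawn_slave(virsh_uri="qemu:///system"):
        conn = libvirt.open(virsh_uri)
        dom = conn.defineXML(SLAVE_XML)  # register the slave domain
        dom.create()                     # power on; the node PXE-boots the ramdisk
        return dom

    def run_on_slave(ip, command, username="root", key_filename=None):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(ip, username=username, key_filename=key_filename)
        _, stdout, stderr = ssh.exec_command(command)
        out, err = stdout.read(), stderr.read()
        ssh.close()
        return out, err
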
5. Single test (ramdisk only)
Inputs:
- ramdisk build
- one slave node
- provision.json
- a set of commands and params to execute at the verify step (see the verify sketch after this section)
- lsblk
- parted
- etc
- expected output json
- optional params (inherited from base test case if not specified):
- a custom image
- a custom firmware
- etc
Outputs:
- ramdisk log
- passed: True/False
- in future:
- performance grade (based on statistics)
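
A sketch of the verify step, assuming a run_command callable that executes a shell command on the slave (for example over SSH, as sketched above) and returns its output; the command strings and the expected-output layout are assumptions:

    import json

    # Hypothetical verify-step commands; the real set comes from the test inputs.
    VERIFY_COMMANDS = {
        "lsblk": "lsblk --bytes --list",
        "parted": "parted --machine --script /dev/sda print",
    }

    def verify_slave(run_command, expected_json_path):
        """run_command(cmd) -> str executes cmd on the slave node."""
        with open(expected_json_path) as f:
            expected = json.load(f)
        actual = {name: run_command(cmd).strip()
                  for name, cmd in VERIFY_COMMANDS.items()}
        # passed: True/False
        return all(actual[name] == expected.get(name) for name in VERIFY_COMMANDS)
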
6. Single test (involving reboot to tenant image)
Inputs:
- ramdisk build
- a special tenant image with callback and built-in key
- one slave node
- provision.json
- a set of commands and params to execute at the ramdisk verify step
- expected ramdisk output json
- a set of commands and params to execute at the tenant image verify step (see the reboot sketch after this section)
- expected tenant image output json
- optional params (inherited from base test case if not specified)
Outputs:
- ramdisk log
- tenant image boot log
- passed: True/False
- in future:
- performance grade (based on statistics)
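
A sketch of the reboot-to-tenant-image flow, assuming hypothetical helpers for the two verify phases and a callback flag that the tenant image sets when it phones home:

    import time

    def wait_for_callback(callback_state, timeout=600, poll=5):
        """Poll until the tenant image reports in (its callback sets 'booted')."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if callback_state.get("booted"):
                return True
            time.sleep(poll)
        return False

    def run_reboot_test(slave, ramdisk_verify, tenant_verify, callback_state):
        # Phase 1: checks while still in the ramdisk.
        assert ramdisk_verify(slave)
        # Phase 2: reboot into the deployed tenant image (built-in key + callback).
        slave.reboot()
        assert wait_for_callback(callback_state)
        # Phase 3: checks inside the tenant image.
        assert tenant_verify(slave)
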
7. Logs
- both agent logs and tenant image logs are sent to the controller and published
- logs are sent continuously (where possible) so that a kernel panic or similar failure can still be traced (see the streaming sketch below)
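
A sketch of continuous log shipping, assuming paramiko and tail -F on the slave; every received line is flushed to a file on the controller, so the already-captured trace survives a later kernel panic:

    import paramiko

    def stream_agent_log(ip, remote_log, local_log, username="root"):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(ip, username=username)
        _, stdout, _ = ssh.exec_command("tail -F %s" % remote_log)
        with open(local_log, "a") as out:
            for line in stdout:      # yields lines as they arrive
                out.write(line)
                out.flush()          # keep the controller-side copy current
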
8. Group of tests (Test class)
- Share the same setup config:
- request specific image
- request specific firmware
- request specific node
- multiple disks
- existing data, to test the preserve case (see the class sketch below)
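
A sketch of a test class whose members share one setup, assuming class-level attributes consumed by a base setUp like the one sketched earlier; all names are illustrative:

    import unittest

    # In the real suite this would inherit the base case sketched above.
    class MultiDiskPreserveTest(unittest.TestCase):
        slave_image = "stub-ubuntu.img"      # request a specific image
        slave_firmware = "stub-fw-1.2.bin"   # request a specific firmware
        slave_disks = ["10G", "10G", "5G"]   # request a node with multiple disks
        preseed_data = True                  # pre-create data to test preserve

        def test_data_survives_preserve_deploy(self):
            # The shared setUp would spawn a matching slave; the verify step
            # would then check that the pre-created data is still in place.
            pass
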
9. Parallelism
- Single controller node
- a shared PXE, TFTP, HTTP for all slaves
- Multiple virtual slave nodes (based on config)
- spawned on demand using Python virsh bindings
- power management via virsh
- Multiple baremetal slave nodes (based on config)
- need to set IPMI creds in config
- powered on via IPMI
- power management via IPMI
- if we want to host a few labs in parallel (test a few ramdisks at a time), the available HW nodes need to be split between the labs
- Parallel test execution is done via testr (OpenStack standard testing tool)
- we configure processes=number_of_slaves (see the sketch below)
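
A sketch of wiring the slave count into parallel execution, assuming testr's --parallel/--concurrency options and the config loader sketched earlier:

    import subprocess

    def run_functional_tests(number_of_slaves):
        # testr partitions the test list and runs the partitions concurrently;
        # each worker process takes a free slave from the shared pool.
        subprocess.check_call([
            "testr", "run", "--parallel",
            "--concurrency", str(number_of_slaves),
        ])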