MMC501 INDIVIDUAL PROJECT
MEng Product Design Engineering
DEVELOPMENT OF A SOFTWARE
INTERFACE FOR AUTOMATION
APPLICATION
FINAL REPORT
2015/16
Michelle Okyere
B126700
Supervisor: YM. Goh
2nd Reader: R S. Bhamra
STATEMENT OF ORIGINALITY
“This is to certify that I am responsible for the work submitted in this report, that the
original work is my own except as specified in references, acknowledgements or in
footnotes”.
PRINT NAME: MICHELLE OKYERE
SIGNATURE:
DATE: 03/05/2016
Acknowledgements
I would like to say thank you to Dr Yee Mey Goh, who first of all created this project for me so that I would have the opportunity to learn new software skills, and who has supported me throughout its entirety, allowing me to meet my aims.
I would also like to say thank you to Yuchen Zhao, who has offered support and
guidance at every hurdle that I faced throughout the duration of the project and has
helped to deepen my knowledge in the field of Intelligent Automation.
I would also like to thank Ran Bhamra for his valuable recommendations on improvements to the project that led to it being a success.
Finally, I would like to thank my parents, without whose love and continued support I could not possibly be successful.
Abstract
The field of Industrial Robots has advanced significantly in recent years. Despite this, a robot's ability to adapt to its environment in the way that a human does is very limited. Robot learning allows robots to acquire skills or adapt to their environment.
Ideally, it should be possible to extract skill-based performance from human demonstrations and transfer those skills to robots. However, the lack of convenient and reliable force-related measurements is slowing the advancement of robot learning. A potential solution to this is Surface Electromyography, a non-invasive and inexpensive method of measuring surface muscular activity. The output signal has a strong and stable relationship to the force exerted by the muscles.
A model can be produced that predicts the Force/Torque of each movement from the muscular activation signals. This information can be transferred to a robot so that it can better imitate the motions produced by the human arm.
However, because the raw signals are noisy, they must be processed in four distinct stages. The purpose of this project was to design and build software that carries out all the stages of the signal processing in one programme. The result of the project is Veribus, a software application that allows the user to successfully carry out all the stages of said signal processing and save their results.
Glossary
Term Definition
Action Potential The signal transmitted by a neuron when it goes from a resting state to a stimulated state.
Algorithm The procedure or formula required to solve a problem.
Aliasing A phenomenon in digital sound, in which static distortion
occurs, resulting from a low sampling rate that is less than
twice the highest frequency present in the signal.
Alpha-Motor Neuron A motor neuron that sends messages from the central nervous system to initiate and sustain voluntary, conscious movement.
Attenuate The diminishing of signal strength during transmission, often measured in decibels (dB).
Analogue Signal A type of signal which varies continuously in frequency and amplitude.
Application (App) A type of software that allows a specific task to be performed.
Often referred to as an “App”.
Application Program Interface (API) A set of routines, protocols and rules required for the building of a software application.
Closed form A mathematical expression that contains only a finite number of symbols and includes only commonly used operations.
Covariance Provides a measure of the strength of the correlation between two or more sets of random variates.
DC offset The term used to describe a waveform that has an unequal amount of signal in the positive and negative domains.
Decibels (dB) A logarithmic unit used to express sound or signal level.
Depolarisation The process by which the inside of a cell becomes more positively charged than the outside.
Digital Signal A series of pulses confined to two states, ON or OFF.
Represented in binary code as 0 or 1.
Eigenvalue The scalar value associated with a given linear transformation
of a vector space
Eigenvector A special set of non-zero vectors associated with a linear
system of equations.
Event (in a software engineering context) An identifiable action carried out either by the user of software, such as clicking, or by the system, such as an error message.
Form (in a software engineering context) A platform used to build a user interface which provides a variety of controls such as text boxes and buttons.
Fourier Series Used in Fourier analysis to represent an expansion of a
periodic function.
Impulse Response The output signal that results when an impulse is applied to
the system input.
Motor-Neuron A nerve cell which carries electrical signals to a muscle, triggering it to relax or contract.
Network topology A schematic description of the arrangement of nodes and connecting lines of a network.
Paradigms A framework containing basic assumptions, rules and
methodologies generally accepted by a scientific community.
Piezoelectric The appearance of an electrical potential across the sides of a
crystal when it is subjected to mechanical stress.
Transfer Function The relationship between the input and output signal.
Truncated To cut data off abruptly, beyond a certain value.
Contents
1 Introduction ..........................................................................................................1
1.1 Aim ................................................................................................................2
1.2 Objectives .....................................................................................................2
2 Literature Review .................................................................................................4
2.1 Robot Learning by Demonstration ................................................................4
2.1.1 Need for Programming by Demonstration .............................................5
2.1.2 Current State of PbD ..............................................................................6
2.2 Capturing Human Skills ................................................................................7
2.2.1 Methods of Capturing Human Skill ........................................................7
2.3 Signal Processing .......................................................................................14
2.3.1 Data Rectification .................................................................................14
2.3.2 Filtration ...............................................................................................15
2.3.3 Normalisation .......................................................................................23
2.3.4 Principal Component Analysis .............................................................25
2.3.5 Artificial Neural Network (ANN) ............................................................27
3 Case Study Application ......................................................................................30
3.1 The Equipment ............................................................................................30
3.1.1 The Thalmic Labs Myo Armband Sensor ............................................30
3.1.2 The Xsens MTw Wireless Motion Tracker ...........................................30
3.1.3 The 6-axis ATI Force/Torque Sensor ..................................................31
3.2 Experimental Method ..................................................................................31
3.2.1 Primitive Calibration Stage ..................................................................31
3.2.2 Joint State Data Collection ..................................................................32
3.2.3 Getting Reference Data for the sEMG Signal ......................................33
3.2.4 Getting Reference Data for the F/T Signal ..........................................34
3.2.5 Peg-In-Hole Experiment ......................................................................34
4 Software Design .................................................................................................36
4.1 Purpose of the Software .............................................................................36
4.2 Software Requirements ..............................................................................36
4.2.1 Stage 1: Signal Processing .................................................................36
4.2.2 Stage 2: Final Model Production and Artificial Neural Network ...........36
4.2.3 Stage 3: Launch Artificial Neural Network Program ............................37
5 Method Investigation ..........................................................................................38
5.1 Matlab .........................................................................................................38
5.1.1 Trial of Method ....................................................................................39
5.2 Visual Basic with C# Code .........................................................................40
5.2.1 Trial of Method ....................................................................................41
5.3 Visual Basic with Matlab ............................................................................42
5.3.1 Matlab Coder ......................................................................................42
5.3.2 Dynamic Data Exchange (DDE) .........................................................43
5.4 Methodology Comparison ..........................................................................43
5.4.1 Matlab Only .........................................................................................43
5.4.2 Visual Basic Only ................................................................................44
5.4.3 Matlab Coder C++ Generator .............................................................45
5.4.4 Dynamic Data Exchange ....................................................................45
5.5 Conclusion of Methodology .......................................................................45
6 User Interface Design ........................................................................................46
6.1 Usability ......................................................................................................46
6.2 Aesthetics ...................................................................................................47
6.2.1 Layout ..................................................................................................47
6.2.2 Colour Scheme ....................................................................................49
7 Description of Software ......................................................................................52
7.1 Introduction to Software ..............................................................................52
7.2 Signal Processing .......................................................................................53
8 Discussion: Evaluation of Software ....................................................................64
8.1 Improvements to Product ...........................................................................65
9 Project Management ..........................................................................................67
10 Conclusion .......................................................................................................68
10.1 Meeting of Objectives ...............................................................................68
10.2 Concluding Remarks ................................................................................70
11 Works Cited ........................................................................................................I
12 Appendix I ...........................................................................................................I
12.1 Objectives Form ...........................................................................................I
12.2 Initial Gantt Chart ........................................................................................II
12.3 Actual Gantt Chart .....................................................................................III
13 Appendix II ..........................................................................................................I
13.1 Software Design Flowcharts ........................................................................I
13.1.1 Overall Flow of Software ......................................................................I
13.1.2 sEMG Signal Processing ONLY ............................................................I
13.1.3 Force/Torque Signal Processing ONLY ...............................................II
13.2 User Interface Design Mood Board ...........................................................III
13.3 User Interface Design: Review of Existing Software ................................ IV
13.4 Advanced User Interface Designs ..............................................................V
13.5 Method Comparison Matrix ....................................................................... VI
13.6 Force/Torque Signal Processing Stages .....................................................7
1 Introduction
Industrial Robots are an integral part of automation systems, since they are now involved in manufacturing, assembly, packaging, inspection and many other aspects of the production and service industries. Technological advances have allowed the development of intelligent automation to replace manual tasks, leading to higher efficiencies, improved and more consistent product quality, and safer working conditions for employees, who can now avoid unsafe working environments. An excellent example of this can be seen at Ford Motor Company's white assembly plant in Changan, China, see Figure 1.1.1. The assembly line here is highly automated and the robots carry out multiple processes, such as laser welding, spot-welding and palletizing, which would previously have been carried out by humans. (Chen, 2013).
Despite these advancements, a robot's ability to adapt to its environment in the way that a human does is very limited; as a result, robots are currently only able to carry out repetitive and uncomplicated tasks. However, if this aspect were improved, it would open up the application of robotics to more complex tasks usually carried out by humans. Robot learning is a field that is searching for ways in which robots can acquire skills or adapt to their environment through learning algorithms. These skills include grasping and joint manipulation. (Monfared, Automation Processes and Advanced Technologies, 2015). If successful, the manufacturing and assembly industries could stop relying so heavily on manual work processes.
Trained humans are intrinsically good at handling complex situations. If the skills of a human can be numerically represented and understood by a robot, then even if they are not duplicated perfectly, they will become a strong source of prior input knowledge. (Zhao Y. , Al-Yacoub, Goh, Justham, Lohse, & Jackson, 2015).
Figure 1.1.1 Ford's car assembly line using ABB robots (Chen, 2013)
PhD student Yuchen Zhao has been working on achieving this as part of his work. Surface electromyography (sEMG) is a method of recording the electrical activity produced by muscles. Zhao has been recording these signals, as well as the Force/Torque (F/T) output produced by the muscles, with the purpose of mapping them together. The aim is to produce a model that can predict the Force/Torque output from the muscle activity, and then to transfer this data to a robot so that it can mimic the actions of a human more accurately.
A key issue in this project is that both of these signals are riddled with noise and unnecessary data when first collected. To produce an accurate and reliable predictive model, the signals must go through multiple stages of signal processing to remove the noise and excess components.
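The modelling idea described above can be sketched in outline: rectify and smooth the raw sEMG signal to obtain an activation envelope, then fit a mapping from envelope to measured force. The sketch below uses synthetic data, a moving-average envelope and a least-squares linear map; all names and parameter values are illustrative assumptions, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for recorded data: raw sEMG (8 channels) and measured force.
fs = 200                                                # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
activation = 0.5 + 0.4 * np.sin(2 * np.pi * 0.5 * t)    # "true" muscle activation
emg = activation[:, None] * rng.standard_normal((t.size, 8))  # EMG is noise-like, amplitude-modulated
force = 30.0 * activation + rng.normal(0, 0.5, t.size)        # force tracks activation

# Stage 1: full-wave rectification.
rectified = np.abs(emg)

# Stage 2: smoothing (moving-average envelope, a simple low-pass surrogate).
win = 50
kernel = np.ones(win) / win
envelope = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), 0, rectified)

# Stage 3: fit a linear map from the 8 channel envelopes to force (least squares).
X = np.column_stack([envelope, np.ones(t.size)])   # add a bias column
coef, *_ = np.linalg.lstsq(X, force, rcond=None)

predicted = X @ coef
rmse = np.sqrt(np.mean((predicted - force) ** 2))
print(f"RMSE: {rmse:.2f}")
```

In the project itself this final mapping stage is performed by an artificial neural network rather than a linear model; the linear fit is used here only to keep the sketch self-contained.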
1.1 Aim
The purpose of this project is to design software that carries out all the stages of the
signal processing on a single platform. The central mission is to make the process of
human skill capture and transfer to robots easier by using F/T trajectories to control
robots.
1.2 Objectives
This aim can be achieved by meeting the following objectives:
Primary:
1. To attain a good understanding of the technical measuring process for
capturing human skills;
2. To attain a thorough understanding of the software interface requirements and
select appropriate programming languages;
3. To write the code for software which interfaces with the relevant applications
and processes the measurement data;
4. To create the user interface which allows the user to control key parameters
of the software;
5. To carry out tests on the software and optimise the software.
Secondary:
1. To explore options for further development of the software and its potential
other uses.
In order to fulfil these objectives, it is necessary to outline specific deliverables:
Primary:
1. Carry out an in-depth literature review
2. Carry out signal processing using various methods, to get a better
understanding
3. Research and find methods to evaluate
4. Trial and review each method
5. Write code that controls the user interface and carries out the various stages of
signal processing
6. The software must be able to open ANN app within Matlab
7. Design user interface(s) that is linked to the code and allows the user to carry
out stages on their own data
8. Allow users to input their own data and review the performance of the
program
9. Keep making the necessary changes to ensure that the software has no bugs
and performs as desired.
Secondary:
1. Explore the software's ability to allow users to change the parameters of the signal
processing stages
2. Allow users to repeat the experiment multiple times and save their data at the end
3. Allow users to convert their raw data into a format that can be read by the
software
2 Literature Review
This section is an in-depth review into three key topics that need to be understood in
order to meet the objectives stated. Relevant information from various sources for
each of them has been compiled and studied to develop a comprehensive
understanding.
1) Robot Learning by Demonstration: This is the industry that this project aims to
help to advance, particularly the Programming by Demonstration area.
2) Capturing Human Skills: The purpose of this project is to allow the skills used
by humans whilst performing certain tasks to be reliably transferred to a robot;
therefore it is crucial that the skills and methods of recording them are
understood.
3) Signal Processing: The skills captured from humans are produced in the form of signals that contain a large amount of noise, which needs to be removed before a reliable model can be created. This can be achieved through various methods of signal processing, which need to be explored before selecting the most appropriate ones.
2.1 Robot Learning by Demonstration
According to Biggs and MacDonald, industrial robot programming methods can be split into two classes: Manual and Automatic programming, see Figure 2.1.1. Manual systems require the programmer to directly enter the desired behaviour of the robot using graphical methods such as ladder logic diagrams, or through the use of text-based programming languages. By contrast, in Automatic programming systems the program used to control the robot's motions is created automatically; the user therefore controls the robot's behaviour but has no influence on the programming code. (Biggs & MacDonald, 2003).
Figure 2.1.1: Manual (left) and Automatic (right) methods (Biggs & MacDonald, 2003).
Programming by Demonstration (PbD) is an automatic programming method and will be the focus of this report. The concept was first conceived in the 1980s and was inspired by the way humans learn new skills by imitation. The idea attracted researchers within the field of manufacturing robotics as a way of automating the tedious manual programming of robots and reducing the costs associated with the development of robots within a factory.
The aim of PbD is to extract the skill-based performance from human demonstrations and transfer those skills to robots. The demonstration can be performed in two ways. The first is the traditional method, where the robot is manually guided using a remote control such as a teach pendant. This is a relatively common method that is easy to learn; it is often adopted by shop-floor workers, who use it to control robotic processes such as the programming of a robot welder.
The second is a more natural method that utilizes gestures and voice. The task is performed with no interaction with the robot at all, while the movements of the operator are recorded using a motion-capture system. The latter method is more advanced and flexible, and is the area that this project aims to support.
2.1.1 Need for Programming by Demonstration
The importance of PbD has grown because conventional methods of programming a robot become unfeasible when situations are too complex, resulting in continued reliance on human labour. As industrial robots are equipped with increasingly advanced capabilities such as laser welding, and more complex hardware such as multiple sensor modalities, programming a robot has become extremely complex. (Monfared, Automation Processes and Advanced Technologies, 2015). For example, languages such as C are too complex for everyone to learn, but the alternative, using teach pendants, disrupts production.
In industrial robotics, the goal is not only to reduce costs, but also to create or assemble products far more efficiently than human operators could achieve. PbD is deemed to be particularly useful for teaching Service Robots used in Human-Robot collaboration scenarios, for example product inspection, where the robot might be used to carry heavy parts. In this case, PbD goes beyond transferring skills and moves towards finding ways for the robot to interact safely with humans. The robot needs to be able to recognize human motions and predict their intentions.
The need for PbD has become even more crucial with the introduction of
Humanoid robots. These robots offer many benefits, namely being far more flexible
than the typical industrial robot, because of the multiple degrees of freedom they
have as a result of being designed based on the human form. However, their
introduction into industry presents even more challenges in terms of learning and
communication.
In comparison to industrial robots, which are often only required to carry out tasks in a static environment, humanoid robots are expected to carry out tasks in dynamic situations. These robots need to adapt to new environments; therefore the algorithms used to control them need to be flexible and versatile. Due to the continuously changing environments they operate in and the huge number of tasks the robot is expected to perform, they need to be able to constantly learn new skills and adapt their existing skills to new contexts. As a result, the humanoid is expected to go even further and behave in a human-like manner with regard to social interaction, gestures and learning behaviours. (Calinon, 2009). PbD could allow this, as it is a method designed to enable robot adaptability to new environments and situations.
2.1.2 Current State of PbD
Several programming systems and approaches based on human demonstration have been explored since the introduction of PbD. However, a key stage in this area is capturing the skills a human uses to perform a certain task in a way that can be transferred to a robot and allows it to learn.
Currently, the skill-based tasks performed by a human operator are difficult to extract and generalise so that a robot can understand and replicate them. Researchers are currently utilizing sensors such as magnetic markers and force plates to record the trajectories of the human operator, encoding these sequences into models which are then transferred to the robot so that it can replicate the movements. (Zhao Y. , Al-Yacoub, Goh, Justham, Lohse, & Jackson). Typically, industrial robots today have Position and Force control at the end-effector. However, these tools are bulky and impractical when it comes to performing tasks on-line. There is a need for more agile methods that allow the process of skill capturing to be more flexible and precise.
2.2 Capturing Human Skills
Throughout the Intelligent Automation industry, there is a growing demand for
methods to capture and classify the explicit and tactile skills used by human
operators whilst carrying out complex tasks. (Everitt, Fletcher, & Caird-Daley, 2015).
There are numerous reasons for wanting to capture human skills. A common reason is
to understand the processes that intervene between stimuli and response, which
allows one to explain behaviour. Another reason is to explain a particular group of
human behaviour such as event detection or to describe the general mechanisms
that provide the basis for all behaviours such as working memory.
Describing the relationship between stimuli and responses also makes it possible to
design stimuli that produce the desired response. If the input-output relationships for
particular types of task can be measured, it makes it possible to adjust the task
parameters in a way that will produce a better output. It also provides the basis for
training others to exhibit the desired behaviours. In this case, the input-output
relationship must be understandable and executable by other humans.
The final reason is to capture human skills for the purpose of embedding the
information in a machine. (Rouse, Hammer, & Lewis, 1989). This will make it
possible to build an autonomous machine that can replace human activity in areas
that may be dangerous or when carrying out mundane but complex tasks. For
example, if the Force/Torque input from a human's muscles can be measured whilst
they carry out a delicate assembly task, it might be possible to programme the robot
to apply this gentle force/torque, thus improving the quality of the product.
2.2.1 Methods of Capturing Human Skill
There are several methods that have been explored in capturing human skill, but
before selecting the most appropriate method, it is important to classify the task. The
tasks can fall into one of two categories: Discrete and Continuous tasks. A discrete
task is one that has a fixed beginning and end, for example, switching a button on or
off. A continuous task has no clearly defined beginning or end and often has an
objective to maintain a status in opposition to confounding influences, for example steering a car. Previous research has found it extremely difficult to devise a skill-capturing method that will accommodate both the fluid nature of continuous tasks and the rigid nature of discrete tasks. (Everitt, Fletcher, & Caird-Daley, 2015).
The options available for capturing these tasks are described as follows:
1. Computer Vision-based methods
In general computer vision (also referred to as Machine Vision), cameras are used to optically sense the presence and shape of an object, and the image is then processed. Image acquisition is the first stage, in which cameras, lenses and lighting designed to provide the differentiation required for subsequent processing are used to capture the process being tracked. A microprocessor then processes the image, usually in less than one second; the image is measured and the measurements are digitized. The microprocessor uses various methods for image processing, including edge detection and neural networks. (Monfared, Automation Processes and Advanced Technologies, 2015). Based on the results of the image processing, a decision is made, for example whether a part is faulty or acceptable.
Although this method is often adopted in assembly lines for quality inspection, amongst other applications, it can also be used for human tracking and gesture recognition for the purpose of robot learning. The application of computer vision to human skill capture comprises three stages: detection, tracking and recognition.
Detection involves defining and extracting visual features that belong to the body
part in question, the hand for example, in the field of view of the camera(s). During
the tracking stage, sequential data association is performed between successive
image frames. Thus, at each moment in time, the system will be aware of the
presence and location of objects. Tracking also allows the estimation of model
parameters, variables and features that were not observable at a certain moment in
time. Finally, the recognition stage is the interpretation of the semantics that the
hand location, posture and gesture convey. (Zabulis, Baltzakis, & Argyros, 2009)
The data provided by this process can be transferred to a robot so that it can repeat the motions of the human limb. For example, an anthropomorphic robotic hand can replicate the gestures performed by a human hand whilst picking up an object and moving it to another place.
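The detection stage described above rests on extracting visual features such as edges from each frame. As a purely illustrative sketch of one such feature extractor, the function below applies the Sobel operator to a toy grayscale image using only NumPy; it is not the vision pipeline used in any system cited here, and the image is a made-up example.

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return the gradient magnitude of a 2-D grayscale image (Sobel operator)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                          # vertical gradient kernel
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]   # 3x3 neighbourhood around pixel (i, j)
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)                      # combined gradient magnitude

# A toy image: dark background with a bright square; edges appear at its border.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = sobel_edges(img)
```

The gradient magnitude is zero in uniform regions and large along the square's boundary, which is exactly the differentiation that later tracking and recognition stages build on.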
2. Data Gloves
A Data Glove is an interactive electromechanical device worn on the hand, which facilitates tactile sensing and fine-motion control in robotics and virtual reality. Tactile sensing involves the continuous sensing of variable contact forces using an array of sensors, which sense the force being applied using strain gauges, piezoelectric devices or magnetic induction. (Monfared, Automation Processes and Advanced Technologies, 2015). Fine motor control involves the use of sensors to detect the movements of the wearer's hands and fingers, and the translation of these motions into signals that can be used for a robotic hand. (Rouse M. , 2005). The gloves typically comprise a cloth material, with sensors sewn at each degree of freedom, see Figure 2.2.1. They can be used to measure the responses from various hand activities such as grasping.
A study at the Learning Algorithms and Systems Laboratory (LASA) explored a new setup for a "sensorized" data glove. This showed that data gloves make it possible to measure the interaction forces of the hand as well as the wearer's behaviours, such as using their fingers in opposition. (R.L, Khoury, J, & A, 2014). This information provides a more complete picture of human grasping and manipulation skills, which will allow them to be transferred to an anthropomorphic robotic hand.
3. Biosignals
Figure 2.2.1 A CyberGlove data glove with Tekscan tactile sensors (R.L, Khoury, J, & A, 2014)

Biosignals are the signals (electrical and non-electrical) produced in living beings that can be measured and monitored. There are two classes of Biosignals: Permanent and Induced. Permanent Biosignals are those which always exist, even without excitation from outside the body. Induced signals, however, have to be triggered artificially and only exist at the time of excitation. There are multiple types of Biosignal; some of the best known are:
3.1. Electromyography
A motor unit comprises a single alpha motor neuron and the muscle fibres it innervates. The motor neuron supplies the muscle with action potentials, and when these reach a depolarisation threshold, the muscle contracts. This depolarisation produces an electromagnetic field, which is measured as a very small voltage: the EMG signal. Because it reflects the electrical activities of the motor units, the signal has a strong and stable relationship to the force exerted by the muscles.
Surface Electromyography (sEMG) is a non-invasive and inexpensive method of
measuring EMG signals, where electrodes are placed on the skin over the muscle
being measured. Figure 2.2.2 is a schematic diagram of the typical set-up of EMG
signal acquisition.
The signal itself tends to be quite complex due to its sensitive nature, which makes it
easily affected by the anatomical and physiological properties of the muscles and by
the instrumentation used for the detection and recording of the signal. (Motion Lab
Systems, Inc, 2016). There are four types of noise sources that influence the output
raw signal:
1) Inherent noise from the electronic parts inside the signal detection and
recording instruments used to collect the data;
2) Ambient noise from electromagnetic radiation in the environment;
3) Motion artefacts, with electrical signals mainly in the 0-20 Hz frequency
range, from the electrode-skin interface;
4) The inherent instability of the EMG signal, with unstable components in the
0-20 Hz range that occur due to the quasi-random nature of the firing rate of the
muscular motor units. (Wang, Tang, & Bronlund, 2013)
Figure 2.2.2 A schematic diagram of the sEMG set-up (Hossain, 2015)
As a result, the signal must first be processed to remove the excess noise and
render it suitable for analysis and interpretation.
Despite its sensitivity, the signal is used in many fields such as assistive
technology, rehabilitative technology, armbands for mobile devices and
muscle-computer interfaces. A benefit of sEMG is that the sensors can be easily
worn by the user and are relatively cheap, thus making them very useful in capturing
human skills.
3.2.Mechanomyography
Mechanomyography (MMG) is a mechanical signal that can be detected when the
muscle performs any activity; it has been described as the mechanical counterpart to
EMG signals. The technique uses specific transducers to record muscle surface
oscillations that occur due to the mechanical activity of the motor units.
MMG signals can be detected using several types of transducers, including
piezoelectric contact sensors, microphones, accelerometers and laser distance
sensors. Figure 2.2.3 shows the typical arrangement of MMG measurement using an
accelerometer as the transducer.
Figure 2.2.3. The typical set of MMG signal acquisition
(Islam, Sundaraj, Ahmad, & Ahamed, 2014)
MMG offers some notable advantages over sEMG. The first is that, because the
signal propagates through the muscle tissue, the MMG sensors do not need to be
placed in a precise or specific location. A more significant advantage is that, being a
mechanical signal, it is not affected by changes in skin impedance due to sweating.
(Islam, Sundaraj, Ahmad, & Ahamed, 2014). Both of these properties make the
acquisition of MMG signals far easier than acquiring sEMG signals. However, the
raw signal is still subject to noise and must therefore be processed before it can be
analysed and interpreted.
3.3.Electroencephalography
Electroencephalography (EEG) signals provide information about the spontaneous
electrical activity of the human brain. EEG signals are very popular in the analysis of
brain activity and determining the state of a human being. During the EEG procedure,
multiple small sensors are attached to the scalp, see Figure 2.2.4, to detect the
electrical signals produced when brain cells send messages to each other. (Trust,
2015). Each signal is amplified and digitized, and then stored electronically.
During the recording of the signal, a series of procedures takes place to induce
normal or abnormal EEG activity, such as eye closure, mental activity and sleep.
A typical recording shows two EEG signals: the top is a normal signal and the
bottom is the signal produced when the patient closes their eyes.
Figure 2.2.4 Left: An EEG cap with multiple electrodes. Right: EEG readings on a monitor.
The raw EEG signal usually has numerous undesirable characteristics such as being
complex, noisy and non-stationary. Therefore, it requires specific signal processing
before it can be properly interpreted.
3.4.Electrocardiography
The electrocardiography (ECG) signal corresponds to the electrical activity of the
heart. When an electrical potential is generated in a section of the heart, an electrical
current is conducted to the body surface in a specific area. The ECG records
changes in the magnitude and direction of this activity.
The recording takes place by placing electrodes at standard positions, typically on
the patient’s chest and limbs, see Figure 2.2.5. These electrodes detect the changes
in the aforementioned current. The voltages are then amplified and recorded on ECG
paper as waves and complexes (Stansbury, Brufton, Richardson, & Lyons, 2015).
The raw signal is influenced by several factors, which results in a very noisy signal.
Sources of noise include lung and breath sounds as well as electrode contact noise.
Therefore the raw ECG signal must be processed before it can be properly
interpreted.
Each of the described skill-capturing methods produces output signals with a large
amount of noise, which must therefore go through post-processing; if not, the
interpretation of the signal analysis may be incorrect. This is why there is a need for
software designed to reduce or eliminate noise in a way that is user friendly.
When trying to capture human skills for the purpose of transferring them to robots,
the types of signals used are typically those which correspond to the electrical
activity of muscle movement. Therefore, since the EEG and ECG methods do not
fulfil this requirement, they are not used in human skill capture for intelligent
automation purposes.
Figure 2.2.5 (Bupa's Health Information Team, 2010)
Figure 2.3.1: The top is the raw signal, the
bottom is the result of full-wave
rectification. (Konrad, 2006).
If the skills can be extracted through means that require far less effort, it will make
robot learning far easier. In an industrial situation, for example a car assembly line, it
is necessary to track the force and torque output provided by the humans, and then
teach the robot using this information. However, it is not feasible to place
Force/Torque sensors on each component of the car, since they are bulky and not
dedicated. Therefore, there is a need for reliable Force/Torque prediction whilst
carrying out these assembly tasks, and one solution is to measure muscle activity,
since the muscles generate the force; hence the EMG signal has been selected. The
detection of sEMG is far easier since it only requires the use of lightweight and
portable sensors.
There is a need to enable accurate prediction of the Force/Torque output from the
sEMG signal. Signal processing is a vital part of producing a predictive model that is
accurate and usable; hence the need for a platform that allows this processing to be
achieved accurately and quickly. Once this has been achieved, it will remove the
need for bulky sensors during human demonstrations and make the skill extraction
and post-processing far easier, thus creating a more flexible human-robot control
interface.
2.3 Signal Processing
As mentioned previously, the processing of raw data signals is crucial in producing
an accurate predictive model for the Force/Torque output. There are many methods
of processing available, but this report will focus on the four methods recommended
by Zhao.
Data Rectification
Rectification is the first stage of processing the raw data, but it is only carried out on
the sEMG signals, since they are far noisier than the raw Force/Torque signals. This
stage translates the raw sEMG signal into a single polarity, which usually means that
all the negative sEMG values are transformed into positive ones. This stage is
necessary because the average of a raw sEMG signal is usually zero; therefore,
when an attempt is made to smooth it, the result is just zero. (Rose, 2011)
During this stage, the mean of the signal is calculated, the signal is integrated, and a
Fast Fourier Transform is applied to calculate the discrete Fourier transform. This
means that the signal is transformed from its original domain, in this case time, into
the frequency domain.
There are two kinds of rectification. The first is full-wave rectification, which works by
inverting the parts of the EMG signal that are less than zero and adding them to the
values greater than zero, thus resulting in a completely positive signal. This is
usually the preferred process, and an example can be seen in Figure 2.3.1.
Half-wave rectification, in contrast, discards all data less than zero. (McDonough's,
2008). This process is shown in Figure 2.3.2, where the dotted lines represent the
values less than zero, which are deleted during the process.
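As a concrete illustration of the two schemes, the following sketch (NumPy, with made-up signal values for illustration only) applies full-wave and half-wave rectification to a zero-mean signal:

```python
import numpy as np

# A zero-mean toy "sEMG" trace (values are illustrative only)
raw = np.array([0.4, -0.3, 0.25, -0.5, 0.1, -0.05])

full_wave = np.abs(raw)          # reflect negative values: all samples positive
half_wave = np.maximum(raw, 0)   # discard (zero out) values below zero

# The raw mean is near zero, so smoothing it directly is uninformative;
# the full-wave rectified mean is a meaningful amplitude estimate.
print(raw.mean(), full_wave.mean())
```

Because full-wave rectification keeps the energy of the negative half of the signal, it is the variant normally used before smoothing.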
Filtration
Due to both signals’ sensitivity to various factors during collection, they are subject
to a significant amount of noise that needs to be removed before they can be
interpreted correctly. Filtering is a process in which frequencies of a specific range
are diminished whilst others are allowed to pass, thus limiting the frequency
spectrum of the output signal.
Digital signal filters operate on discrete-time signals; despite being an analogue
signal, sEMG signals can still be handled in this way once sampled. There are two
primary types of digital filter structures, Finite Impulse Response (FIR) and Infinite
Impulse Response (IIR); these are used to implement any sort of frequency
response digitally. (Wagner & Barr, 2012). Both of these will be explained at a later
stage.
Figure 2.3.2: Half wave rectification of a raw sEMG signal (McDonough's, 2008)
The frequency range that is diminished is called the “stopband” and the frequencies
that are allowed to pass are called the “passband”. There are many types of filters,
the most common being low pass, high pass, band pass and band stop.
Low-pass filtering removes any high-frequency signals. The high-frequency cut-off
should be quite high so that rapid on-off bursts of the signal are still clearly
identifiable. High-pass filtering removes low-frequency noise, including any DC
offset, by removing signals with a frequency below a specified cut-off value; the
most common values for this cut-off lie in the range 5-20 Hz. This makes different
signals within the data more distinguishable from one another, thus resulting in a
clearer signal.
A band-pass filter ensures that only frequencies within a specified, usually quite
narrow, range are transmitted. Finally, a band-stop filter, which passes both low and
high frequencies, blocks a predefined range of frequencies in the middle. Figure
2.3.3 shows the transformation of a signal from low-pass through to band-stop.
FIR Filters
Digital non-causal Finite Impulse Response (FIR) filters are often the recommended
method of filtration. (Merletti, 1999). These filters have an impulse response that is
finite because it settles to zero after a finite duration of time, and they require no
feedback (a main advantage over IIR filters). The difference equation of a filter is the
formula which computes the output sample at time n based on past and present
samples in the time domain. (Smith J. O., 2007) The equation for an FIR filter is:
y[n] = Σ_{k=0}^{N} b_k · x[n − k]

Where:
x[n] is the input signal
y[n] is the output signal
N is the filter order: this determines the number of filter delay lines, i.e. the number
of input and output samples that must be saved so that the next output sample can
be computed.
b_k are the feed-forward coefficients (Wickert, 2016)
Figure 2.3.3 Examples of the four filter configurations (Smith S. W., 1999)
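The difference equation above can be implemented directly; a minimal sketch in Python/NumPy, with illustrative coefficients (not taken from the project), is:

```python
import numpy as np

def fir_filter(x, b):
    """Apply the FIR difference equation y[n] = sum_k b[k] * x[n-k].

    x : input signal; b : feed-forward coefficients (length N+1 for order N).
    Samples before the start of the signal are taken as zero.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
    return y

# Example: order-2 filter with illustrative coefficients
b = [0.25, 0.5, 0.25]
x = np.array([1.0, 0.0, 0.0, 0.0])   # unit impulse
impulse_response = fir_filter(x, b)
print(impulse_response)
```

Applying the filter to a unit impulse returns the coefficients themselves, showing the impulse response settling to zero after a finite duration; in practice a library routine such as `numpy.convolve` performs the same computation.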
2.3.2.1.1 FIR Filter Design
Within the field of FIR filters there are numerous techniques available to design them.
The following explains the four main techniques used in the filtering of signals:
1. Design by Windowing
These are often considered the most straightforward methods of designing FIR
filters. They work by determining the infinite-duration impulse response, obtained by
expanding the frequency response of an ideal filter in a Fourier series; this response
is then truncated and smoothed using a window function.
The filters in this category are usually considered relatively simple because their
impulse-response coefficients can be obtained in closed form and determined very
quickly, even using a calculator. However, the passband and stopband ripples of the
resulting filter are restricted to being approximately equal. (Saramaki, 1993)
An example of design by windowing is the Moving Average Filter. The output signal
can be computed using the following equation:

y[i] = (1/M) · Σ_{j=0}^{M−1} x[i + j]

Where:
x[i] is the input signal
y[i] is the output signal
M is the number of points in the average; this can also be described as the window
size.
Based on a specified time window, a certain amount of data is averaged using a
sliding-window technique. This filter is useful for reducing noise in a waveform.
When it is applied to a rectified signal, the result is called the Average Rectified
Value (ARV). Figure 2.3.4 shows the result of an 11-point moving average filter
(M = 11) and a 51-point filter (M = 51).
In this example, a rectangular pulse is buried in noise (see the original signal on the
left). As the number of points in the filter increases, the noise decreases; however,
the edges become less sharp. The moving average filter is the optimal solution for
this problem: it provides the lowest noise possible for a given edge sharpness.
(Smith S. W., 1999)
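The sliding-window averaging described above can be sketched in a few lines of NumPy; the noisy rectangular pulse here is synthetic, loosely mirroring the Smith example rather than reproducing it:

```python
import numpy as np

def moving_average(x, M):
    """y[i] = (1/M) * sum_{j=0}^{M-1} x[i+j], for every full window."""
    return np.convolve(x, np.ones(M) / M, mode='valid')

# Synthetic rectangular pulse buried in noise (illustrative data)
rng = np.random.default_rng(0)
pulse = np.concatenate([np.zeros(100), np.ones(100), np.zeros(100)])
noisy = pulse + 0.4 * rng.standard_normal(pulse.size)

smooth_11 = moving_average(noisy, 11)   # modest smoothing, sharp edges
smooth_51 = moving_average(noisy, 51)   # stronger smoothing, softer edges
```

Increasing M reduces the noise on the flat regions at the cost of blurring the pulse edges, which is the trade-off the figure illustrates.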
2. Least-Mean-Square Method
The least-mean-square (LMS) method is based upon the use of least-squares
approximation, which is used to implement the method of steepest descent in
statistics. The error signal is the difference between the desired and actual signal.
Since the signal statistics are estimated continuously, the LMS algorithm can adapt
to changes in the signal statistics; LMS is therefore an adaptive filtering method.
(Lund University, 2011). The method works by finding the filter coefficients that
produce the least mean square of the error signal.
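The LMS update can be sketched in a few lines; this is a generic textbook form of the algorithm with synthetic data (the step size, tap count and test system are all illustrative, not taken from the project):

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.05):
    """Adapt FIR coefficients w so that the filter output tracks d[n].

    x : input signal; d : desired signal; mu : step size (steepest descent).
    Returns the adapted coefficients and the error signal.
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]   # most recent samples first
        y = w @ xn                             # filter output
        e[n] = d[n] - y                        # error = desired - actual
        w += 2 * mu * e[n] * xn                # steepest-descent update
    return w, e

# Identify an unknown 2-tap system from its input/output signals
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
true_w = np.array([0.7, -0.2])
d = np.convolve(x, true_w)[:len(x)]
w, e = lms(x, d, num_taps=2, mu=0.01)
print(w)   # approaches [0.7, -0.2] as the error is driven to zero
```

Because the coefficients are updated on every sample, the filter keeps tracking the signal statistics even if they drift, which is what makes LMS an adaptive method.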
3. Maximally Flat FIR Filters
Figure 2.3.4: The results of an 11-point moving average filter (Smith S. W., 1999)
Maximally flat filters are a family of symmetric filters known for the monotone and flat
magnitude response they exhibit. The advantage of these filters is that they are
simple to design, and they are useful in applications where the signal must be
preserved with very small error near zero frequency. Much like the windowing
method, their filter coefficients can be found in closed form, which is why they are
relatively simple to compute. Furthermore, these types of filters allow a passband
with a smooth frequency response to be achieved.
4. Minimax FIR Filters
Minimax filters provide good control of the detailed frequency behaviour of a filter,
and allow the number of independent filter coefficients required to optimally design
an FIR filter to be reduced. This makes them a practical option for the filtering of
signals.
Infinite Impulse Response Filters
An IIR filter has an infinite impulse response and, unlike an FIR filter, has feedback,
which gives it a much better frequency response for a given order. The following
difference equation can be used to compute the output signal:
! " = :; ∙ ![" − 1]
,
;-8
+ $% ∙ '[" − *]
6
%-.
Where:
' " is the input signal
! " 	is the output signal
0	is the feedback filter order
3	is the feedforward filter order
$% is the feed-forward coefficients
:; is the feedback coefficients (Wickert, 2016)
The feedback makes IIR filters prone to stability issues that FIR filters do not
possess. Also, in cases where phase linearity is required, it is best to use an FIR
filter, since IIR filters do not possess linear-phase characteristics (Milivojević, 2009);
however, in the case of sEMG and F/T signal processing, this is not an issue. An
advantage that IIR filters have over FIR filters is that they tend to meet a given set of
specifications with a much lower filter order than a corresponding FIR filter.
(MathWorks, 2016).
According to MathWorks, the classical types of IIR filters used in EMG signal
processing are the Butterworth, Chebyshev (types I and II), elliptic and Bessel filters.
(MathWorks, 2016). Each of these filters can be used in the low-pass, high-pass,
band-pass and band-stop configurations.
1. The Butterworth Filter:
This filter is best used for its maximally flat response in the passband, minimising
passband ripple, and is the most desirable filter for applications that require the
preservation of amplitude linearity. (Luca, 2003) The behaviour of a Butterworth
filter can be summarised by its frequency response function, which has the following
formula (Taha, et al., 2015):
|H_n(jω)|² = 1 / (1 + (ω / ω_c)^{2n})

Where:
H_n is the frequency response of the order-n filter.
n is the filter order; as the filter order increases, the roll-off around the cut-off
frequency becomes steeper.
ω_c is the cut-off frequency, which can also be described as the passband edge
frequency. Values above or below this frequency (for low-pass or high-pass filters
respectively) are not allowed to pass; for band-pass and band-stop configurations
there are two cut-off frequencies, and values inside or outside of this range are
blocked.
ω_s is the sampling frequency, which should always be at least twice the highest
frequency component that appears in the signal. (Welker, 2006).
2. The Chebyshev Filter
The Chebyshev filter is used to separate one band of frequencies from another. Its
primary characteristic is its sharp transition, which results from a mathematical
strategy that allows rippling in the frequency response, producing a faster roll-off.
Type I Chebyshev filters are the most common type of Chebyshev filter; their
squared magnitude response can be determined by this equation (Matheonics
Technology Inc, 2009):
|H(jΩ)|² = 1 / (1 + ε² · T_n²(Ω / Ω_scaled))

Where:
n is the filter order.
Ω_scaled = 2πf_scaled is the constant scaling frequency; this is equal to the
pass-band edge frequency.
Ω = 2πf is the angular frequency.
T_n is the Chebyshev function of degree n.
ε is the ripple factor.
3. Elliptical Filters
Elliptic filters have equiripple characteristics in both the pass-band and the
stop-band, meaning that the ripples are of equal height in each band (BORES Signal
Processing, 2014). The squared magnitude response can be determined by the
following equation (Matheonics Technology Inc., 2009):
|H(jΩ)|² = 1 / (1 + ε² · R_n²(Ω / Ω_scaled))

Where:
n is the filter order.
Ω_scaled is the constant scaling frequency.
Ω is the radian frequency.
R_n(Ω / Ω_scaled) is the elliptic rational function of degree n.
ε is the parameter that characterises the loss of the filter in the pass-band.
A study was undertaken by Sharma, Duhan and Bhatia in which they carried out the
filtering of a raw EMG signal using the Butterworth, Chebyshev I and elliptic filters;
the results of this study are shown in Figure 2.3.5.
Each filter in this example was a low-pass filter, but each had different input
parameters, which can be seen in Figure 2.3.6.
These figures show that, in order to achieve similar levels of filtration, the input
parameters must differ between methods. For example, the Chebyshev filter uses a
higher passband and lower stopband frequency than the Butterworth filter to obtain a
similar output signal.
Figure 2.3.5: Results of EMG filtering, clockwise, raw data, Butterworth filtration, Chebyshev I Filtration
and Elliptic filtration. (Sharmaa, Duhan, & Bhatia, 2010)
Figure 2.3.6 The input parameter used to filter the EMG signal
(Sharmaa, Duhan, & Bhatia, 2010)
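The point about differing parameters can also be seen by asking each design method for the minimum order that meets one fixed specification; the sketch below uses SciPy's order-estimation helpers with illustrative band edges (not the values used by Sharma et al.):

```python
from scipy.signal import buttord, cheb1ord, ellipord

fs = 1000.0              # sampling frequency in Hz (illustrative)
wp, ws = 20.0, 40.0      # passband and stopband edge frequencies in Hz
gpass, gstop = 1.0, 40.0 # max passband loss / min stopband attenuation, dB

# Minimum filter order each family needs to meet the same specification
n_butter, _ = buttord(wp, ws, gpass, gstop, fs=fs)
n_cheby, _ = cheb1ord(wp, ws, gpass, gstop, fs=fs)
n_ellip, _ = ellipord(wp, ws, gpass, gstop, fs=fs)
print(n_butter, n_cheby, n_ellip)
```

Allowing ripple buys a lower order: the monotone Butterworth needs the highest order, while the equiripple elliptic design meets the same specification with the lowest.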
4. The Bessel Filter
The Bessel filter is a linear-phase filter that has a maximally flat phase delay, which
preserves the wave shape of the filtered signal in the pass band. It has a smooth
passband and stopband response, like a Butterworth. However, for the same filter
order, the stop-band attenuation of the Bessel approximation is much lower than
that of the Butterworth approximation. For a first-order filter the magnitude response
is (Bond):
|H(jΩ)| = 1 / √(Ω² + 1)

Where:
Ω is the angular frequency.
Normalisation
The amplitude and frequency characteristics of the raw sEMG signals are highly
sensitive to many factors, including electrode configuration, electrode placement and
skin preparation; these factors vary between individuals, between days for the same
individual, and between electrode configurations. The same applies to the F/T
output signal, which varies with the positions that the muscles are in during the test.
Because of this high sensitivity, it would not be valid practice to directly compare the
signal of a single muscle from a single subject to those of multiple subjects.
Therefore the signals need to be normalised, to give the raw signal a reference
value against which it can be compared. A “good” reference value is one that has
high repeatability, especially when using the same subject under the same
conditions. A reference value that is repeatable for an individual allows comparison
between individuals and between muscles. (Halaki & Ginn, 2012).
The normalisation is usually done by dividing the EMG signal during the task by a
reference EMG value obtained from the same muscle. By normalising to a reference
EMG value collected using the same electrode configuration, the factors that affect
the signal during the task and during the reference contraction are the same; the
result is a relative measure of the activation compared to the reference value.
(Halaki & Ginn, 2012)
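The division-by-reference step itself is simple; a minimal sketch (NumPy, with illustrative values) normalising a rectified sEMG trace to a reference value:

```python
import numpy as np

def normalise(signal, reference):
    """Express each sample as a fraction of the reference value."""
    return signal / reference

rectified = np.array([0.1, 0.4, 0.8, 0.2])   # rectified sEMG (illustrative)

# e.g. using the peak of the recording itself as the reference value
norm = normalise(rectified, rectified.max())
print(norm)   # every value now lies in [0, 1], with 1 at the peak
```

The choice of `reference` is the substantive decision; the methods below differ only in how that value is obtained.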
Methods of Normalisation
There are multiple methods available for the normalisation of sEMG signals, but
there is no consensus on which is the best method to use. The methods are
summarised as follows (Halaki & Ginn, 2012):
1. Maximal Voluntary Isometric Contractions
This is the most common method of normalising EMG signals and uses the EMG
recorded from the same muscle during a maximal voluntary isometric contraction
(MVIC) as the reference value. The process works by identifying a reference test
which produces a maximum contraction in the muscle of interest. The test is
repeated multiple times, producing multiple sets of data. The maximum value from
the reference test is then used as the reference value for normalising all the EMG
signals. This allows the level of activity of the muscles of interest to be compared to
the maximal neural activation capacity of the muscle. (Halaki & Ginn, 2012)
2. Peak or Mean Activation Levels obtained during the task under investigation
This method normalises the data to the peak or mean activity obtained during the
activity, in each muscle for each individual separately. It has been shown to
decrease the variability between individuals compared to using raw EMG data or
MVICs to normalise. Furthermore, normalising to the mean amplitude has been
shown to be better at reducing variability between individuals than normalising to
the peak amplitude. (Halaki & Ginn, 2012)
3. Activation level during submaximal isometric contractions
Whilst being the most popular method for obtaining a normalisation reference, using
maximal isometric contractions is not always feasible; for example, where the
subject is not able to achieve a maximum-effort contraction because of physical
limitations. Using submaximal isometric contractions resolves the instability of the
EMG signal at near-maximal levels. Furthermore, previous studies have
demonstrated that using submaximal values produces better reliability between
days than maximal loads. (Sousa & Tavares, 2012)
4. Peak to peak amplitude of the maximum M-wave(M-max)(EMG only)
This method involves the external stimulation of α-motor neurons. When a peripheral
motor nerve is stimulated at a point close to a muscle, it activates the muscle to
contract; the resulting signal is called the M-wave. The amplitude of the stimulation
is increased until the peak-to-peak amplitude of the response stops growing, then
increased by an additional 30%, which allows the maximum M-wave and maximum
muscle activation to be attained. The maximum M-wave value is then used to
normalise the EMG signals. However, this method is problematic because the
accuracy of the M-max is questionable. Its reliability is sensitive to various factors
such as muscle length and the task performed; however, if these factors are
controlled, this method of normalisation has the potential to facilitate comparison
between muscles, between tasks and between individuals. (Halaki & Ginn, 2012)
The Recommended Normalisation Method
There are limited studies available which describe techniques for normalising
Force/Torque signals in an intelligent automation context specifically. Therefore, for
consistency, the method selected for normalising the sEMG signals will also be
applied to the F/T signals.
The literature written by Yuchen Zhao recommends using the Peak or Mean
activation normalisation method. The reason for this is that “the muscles activates
levels are not directly compared, but the activation patterns and their corresponding
force torque datum are of interest”. (Zhao Y., Al-Yacoub, Goh, Justham, Lohse, &
Jackson). The reference value that should be used is the highest value obtained
from the rectified data for the sEMG signal, and the peak value obtained from the
raw F/T data.
Both Force/Torque and EMG normalisation should record other relevant information,
such as joint angles and muscle lengths for isometric contractions, and the range of
joint angles, muscle length, velocity of shortening or lengthening and applied load for
non-isometric contractions.
Principal Component Analysis
The final stage of processing is Principal Component Analysis (PCA). This stage
removes any unnecessary components, reducing the signal down to its basic
components. The number of principal components is either less than or equal to the
number of original variables. The principal components are the underlying structure
of the data: the directions where there is the most variance. (Dallas, 2013)
The purpose of this stage is to allow the model to be used without a fixed electrode
placement, which makes the method more adaptable. In PCA, only the covariance
between the variables (the eight channels of sEMG data) is considered, and the
components are re-ordered from the most important to the least important. For
Force/Torque signals, six degrees of freedom are considered and re-ordered based
on their importance.
PCA Theory
PCA is a linear transformation in which a new coordinate system is selected for the
data set such that the greatest variance by any projection of the data set comes to
lie on the first axis (the first principal component), followed by the nth greatest
variance on the nth axis (Neto, 2016). Once the components of the data set have
been re-ordered in this way, those with less importance can be eliminated.
The eigenvalues and eigenvectors of the covariance matrix for the dataset must be
found in order to compute the importance of the components. The eigenvectors with
the largest eigenvalues correspond to the dimensions that have the strongest
correlation in the dataset; these are the principal components.
Let x₁, x₂, …, x_n be a set of n (m × 1) vectors and let x̄ be their average:

x̄ = (1/n)(x₁ + x₂ + ⋯ + x_n)

Let B be the m × n matrix whose columns are x₁ − x̄, x₂ − x̄, …, x_n − x̄:

B = [x₁ − x̄ | … | x_n − x̄]

This process of subtracting the mean is the equivalent of translating the coordinate
system to the location of the mean. (Camps, R.S.Gaborski, & Seung, 2005).
The symmetric square covariance matrix is then:

S = (1/(n − 1)) B Bᵀ

Then let λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_m ≥ 0 be the eigenvalues of S in decreasing order, with
corresponding orthonormal eigenvectors u₁, …, u_m. These eigenvectors are the
principal components of the data.
In many cases, the largest few eigenvalues of S are much greater than all the others,
which means that the first few principal components explain a significant amount of
the total variation of the data, e.g. greater than 95%; the remaining components can
therefore be eliminated. This is Dimensional Reduction. (Jauregui, 2012).
Figure 2.3.7 displays an example showing the results of dimensional reduction
based on the results of PCA being carried out on a signal using a Matlab function.
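The eigen-decomposition route described above can be sketched in NumPy; here random two-dimensional data stands in for the eight sEMG channels:

```python
import numpy as np

def pca(X):
    """PCA via the covariance matrix.

    X : m x n matrix whose n columns are the m-dimensional observations.
    Returns eigenvalues (descending) and matching eigenvectors (columns).
    """
    mean = X.mean(axis=1, keepdims=True)
    B = X - mean                          # subtract the mean from every column
    S = (B @ B.T) / (X.shape[1] - 1)      # covariance matrix
    vals, vecs = np.linalg.eigh(S)        # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]        # re-order, most important first
    return vals[order], vecs[:, order]

# Correlated 2-D data: almost all variance lies along one direction
rng = np.random.default_rng(0)
t = rng.standard_normal(500)
X = np.vstack([t, 2 * t + 0.01 * rng.standard_normal(500)])
vals, vecs = pca(X)
explained = vals[0] / vals.sum()
print(explained)   # close to 1: the first component explains almost everything
```

Dropping the components after the first here loses almost no information, which is exactly the dimensional-reduction step applied to the sEMG channels.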
All of the methods of signal processing described in this section involve very
advanced and complex mathematics. However, those collecting the signals will not
always understand this theory; therefore there needs to be a platform available for
them to carry out the post-collection processing. It is crucial that the platform is
sophisticated enough to compute the complex algorithms in the background and
produce the cleaner signals, whilst requiring only very limited user input.
Artificial Neural Network (ANN)
The purpose of processing the signals using the previously described methods is to
produce clear signals that can be fed to an Artificial Neural Network. This stage is
used firstly to derive a relationship between the F/T and sEMG data; this relationship
can then be used to predict the F/T output from the sEMG signal. (Liu, Herzog, &
Savelberg, 1999). An Artificial Neural Network is a biologically inspired method of
computing which is thought to be the next major advancement in the computing
industry and offers an initial understanding of the natural thinking mechanism. It is
an information processing paradigm inspired by the way in which biological nervous
systems, such as the brain, process information and learn from experience.
Figure 2.3.7 A signal before (top) and after dimensional reduction (bottom) (MathWorks, 2016)
Figure 2.3.8 is a schematic diagram of a proposed ANN for hand force estimation.
Currently, a machine’s ability to learn from experience is surpassed even by animals.
Computers can do rote learning, a technique for memorising based on repetition,
which makes them skilled at things such as advanced mathematical functions.
But when it comes to recognising patterns, computers struggle, let alone
reproducing those patterns in future actions. The human brain stores information as
patterns and utilises them for tasks such as facial recognition. This process of
storing information as patterns, utilising those patterns and then solving problems is
a new field of engineering and is not yet achievable in robots. (DoD DAC).
Structure of an Artificial Neural Network
An Artificial Neural Network is comprised of a large number of highly interconnected
processing elements (neurons). This set of neurons is organised into interconnected
layers along chosen patterns. Each neuron unit, j, receives some kind of stimulus as
an input from another unit or an external source. Each input, x_ji (i = 1, 2, …), has
an associated weight, w_ji. The neuron processes these inputs and sends the result
through its links to neighbouring output neurons. The output, y_j, is computed by
applying the activation function, f, to the weighted sum of the inputs. This is
represented mathematically in the following equation (Mobasser & Hashtrudi-Zaad,
2005):

y_j = f(net_j) = f(Σ_i w_ji · x_ji)

Figure 2.3.8 Schematic diagram of a proposed ANN (Mobasser & Hashtrudi-Zaad, 2005)
Modern ANN structures have moved away from the initial biological model to one
that works better with statistics and signal processing. There are several types of
ANN structures with variations relating to their topologies and search algorithms. It
is extremely important that these networks are able to adapt to new environments,
thus making them very reliant on learning algorithms. ANN models are also
characterised by their activation functions, number of layers, neurons and the
distribution of the connections.
A typical neural network can be seen in Figure 2.3.9. It is an adaptive system that is
comprised of the following four main sections:
1. A node that activates after receiving incoming signals (inputs);
2. Interconnections between nodes;
3. An activation function that transforms an input to an output, this is located
inside of a node;
4. An optional learning function for managing weights of input-output pairs.
(Tadiou, 2016)
Figure 2.3.9: (Tadiou, 2016)
Case study application:
This section describes the procedure that is carried out in order to collect the sEMG
and F/T signals. The procedure was observed, and data was collected from it for the
purpose of testing the software.
3.1 The Equipment
The Thalmic Labs Myo Armband Sensor
The Myo armband (see Figure 3.1.1) measures electrical activity from muscles using
EMG sensors to detect five gestures made by the hands. The armband consists of
eight built-in medical-grade stainless steel EMG sensor channels. It also uses a
nine-axis Inertial Measurement Unit (IMU) to sense the motion, orientation and
rotation of the arm using three instruments: an accelerometer, a gyroscope and a
magnetometer. This data is collated to get the joint state of the wrist, forearm and
upper arm. The signals are recorded for a specified duration of time and saved using
specialist software.
The xsens MTw Wireless motion tracker
This device is a highly accurate, completely wireless 3D human
motion tracker; see Figure 3.1.2.This uses an IMU sensor to
provide accurate measurements for the orientation, acceleration,
angular velocity and earth-magnetic field. For the sake of this
project, it will be fixed to a glove using it clipping mechanism,
see Figure 3.1.3. This will allow the motion of the wrist to be
measured.
Figure 3.1.1 (Drew Prindle, 2015)
Figure 3.1.3: The IMU sensor clipped onto a glove
Figure 3.1.2: (We Are Perspective, 2010)
The 6-axis ATI Force/Torque Sensor
This Force/Torque Sensor is a device that measures output forces (in newtons) and
torques (in newton-metres) along all three Cartesian axes (x, y and z), six axes in
total, thus producing six signal components. The system is made up of a transducer,
a high-flex cable, an intelligent data-acquisition unit and an F/T controller. It is a
device most commonly used in industry for product testing and robotic assembly, to
ensure that the robotic arm applies only the force and torque necessary to complete
the application (ATI Industrial Automation, 2016). In this case study, it will be used to
measure the F/T output that the human arm applies during the Peg-in-Hole
experiment, by installing it on the base of the pegs used in the experiment.
3.2 Experimental Method
The capture for the sEMG and Force/Torque data is split into two stages:
Primitive Calibration Stage:
This stage is the collection of reference signals that the operator should be able to
repeat, or at least approximate, during the actual PIH experiment, for both the
sEMG and F/T signals.
1. Sensor Calibration Stage:
This initial stage is to calibrate the IMU sensors. The aim is to ensure that all three of
the sensors use one global reference frame; in this case, the chosen reference
frame is the one belonging to the black IMU sensor. The IMU sensors are designed
to use the Earth's true north as their reference; however, they do not always adhere
to this due to variations in the magnetometer. The magnetometer in the IMU is
influenced by the ambient magnetic field, which causes the location of true north
detected by the sensors to be inaccurate.
Figure 3.1.4 (ATI Industrial Automation, 2016)
The calibration of the sensors is carried out by first aligning them in the same
direction on a flat surface (a table), see Figure 3.2.1, and then placing the sensors in
the direction that the user is facing. Here, the black IMU sensor is used as the
parent frame for the rest of the IMUs.
Joint state data collection
Whilst the sensors will now be using one global reference frame, each individual
human body has its own reference frame. The purpose of this stage is to calculate
the joint states of the elbow, shoulder and wrist; in order to do this, the sensor
body-frame readings must be known in terms of their orientations. The process of
collection is as follows:
1. Place the sensors on the arm at three different locations on the limb, as seen in
Figure 3.2.2. It is important that the location of the sensors is the same in every
experiment. For example, the location for the two Myo armband sensors can be
selected by choosing a specific distance away from the tendon in the elbow. The
position of the centre sensors should then be marked with a marker pen so that
they can be put in the same position every time. The IMU-only sensor is worn on
the glove, therefore it can be assumed that it will be in the same location every
time.
2. The operator should make an ‘L’ pose as shown in Figure 3.2.3. All the
orientations from the individual frames will now refer to this reference frame. This
should produce the orientation of each sensor in quaternions. This measurement
will be taken again at the end of the experiment to ensure that the sensors
remained calibrated throughout the experiment.
Figure 3.2.1 Sensor calibration
Figure 3.2.2 Sensors on arm
Figure 3.2.3 The ‘L’ pose
3. Now test the visualisation software to make sure that it is in sync with the sensors.
The user should move their arm freely in space through various positions. If the
software is calibrated correctly, the 3D model of the arm should move to the
same positions as the real arm, see Figure 3.2.4.
4. Wait 5 minutes and re-check the visualisation software, checking that the 3D
model arm still moves in-sync with the real arm.
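Once each sensor reports its orientation as a quaternion, expressing it relative to the black IMU's parent frame amounts to a quaternion multiplication. The following is an illustrative sketch only (written in Python/NumPy rather than the project's actual tooling, with invented function names), assuming unit quaternions in (w, x, y, z) order:

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate (inverse, for a unit quaternion)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def relative_orientation(q_ref, q_sensor):
    """Orientation of a sensor expressed in the reference (parent) frame."""
    return q_mul(q_conj(q_ref), q_sensor)
```

For example, a sensor whose global orientation equals the reference frame's own orientation comes out as the identity quaternion (1, 0, 0, 0), i.e. no rotation relative to the parent frame.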
Getting reference data for sEMG signal
This stage involves collecting sEMG data from the two Myo armband sensors, so
that the final data collected at the end of the Peg-in-Hole (PIH) experiment can be
compared against these values for validation. The sEMG signals will be recorded
while moving the arm in two different ways: not holding the peg and holding the peg.
This is to show that the data collected is repeatable.
1. Hold the arm in front of the chest at a distance where the PIH set-up would be.
2. Position the hand in the same gesture that it would be if it were holding the peg.
3. Now move the arm freely in the space, roughly in the same area the PIH would
occupy. The shoulder and elbow joints must be relaxed, and the elbow joint
must rotate in the same way it would in the PIH experiment. The wrist should
also rotate the hand in the same area as it would move if it were carrying out the
PIH experiment.
4. Repeat step 3 twice, recording the sEMG signals for the duration of one minute
using the signal acquisition software. The first results are for training purposes
and the second results are for testing purposes.
Figure 3.2.4 Calibration of software
5. Now repeat steps 3 and 4 twice, this time whilst holding the peg. The motion of
the hand and arm should be roughly the same as when the hand was not holding
the peg.
Getting reference data for the F/T signal
This stage is where reference signals for the F/T signal will be
acquired. A stationary peg that has been attached to F/T
sensors will be clamped to the table to ensure that it will not
move during the experiment, see Figure 3.2.5.
1. Hold the stationary peg using the same hand
configuration as when the peg was held, for example if
the thumb and index finger were used to hold the peg,
the same should be done to hold this stationary peg.
2. Push down at different radial increments. The elbow
should only go as far as it would during the PIH
experiment; when this point is reached, the operator should return to the starting
point, then start again. This step should be done for one minute, and the signal
should be recorded at the same time.
3. Repeat step 2 twice; as with the sEMG signal acquisition, the first results are for
training purposes and the second results are for testing purposes.
Peg-In-Hole Experiment
Humans can perform a large variety of seemingly simple tasks, but these are often
difficult for robots to imitate. This is because whilst humans have learned and
possess intuitive skills in both grasping and performing assembly tasks, robots have
not yet acquired this ability. In an assembly task, there are two primary subtasks: the
first is the grasping of objects and the second is the actual physical interaction of
objects (Savarimuthu, Liljekrans, Ellekilde, Udet, Nemect, & Kriiger, 2013). The
Peg-In-Hole task is an example of this; it has been studied numerous times from
differing perspectives and with differing objectives, and is often used as an example
of an assembly task.
Figure 3.2.7 (Zhao Y. , Al-Yacoub, Goh,
Justham, Lohse, & Jackson, 2015)
Figure 3.2.5: Stationary peg
with F/T sensor
Figure 3.2.6
(Bodenhagen, Fugl, Willatzen, Petersen, & Kruger, 2012). It has been selected as
the best activity to be carried out for the acquisition of the sEMG and F/T data.
The Force/Torque data is acquired using a 6-axis ATI force torque sensor which is
installed on a fixed plate.
1. Hold the peg using the same configuration that was used to hold the
stationary and freely moving peg before.
2. Perform the peg-in-hole task, by carrying out the following simple steps:
approaching, insertion, releasing and waiting. Do this for one minute, whilst
recording the signal.
3. Repeat step 2 twelve times; each time, the peg should start at roughly the same
position.
4. This process should be done with two pegs of different diameters (15.8 mm and
16 mm), resulting in a total of 24 experiments being carried out.
5. Now check that the sensors stayed calibrated throughout the duration of the
experiment, by collecting the joint state data in the same way as before. If the
results are significantly different, the entire experiment will have to be
repeated.
The results of this experiment can now be uploaded to the software, where they will
be processed so that they are suitable for input into the ANN.
Figure 3.2.8: Peg-in-Hole experiment
Section 4: Software Design
4.1 Purpose of the software
The raw sEMG signals and F/T signals collected from the previously described
experiment are subject to a significant amount of noise and unnecessary
components. In order for the two to be mapped together to produce a predictive
model that is both accurate and reliable, the signals need to be processed so that
they are clear enough to be correctly interpreted.
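The mapping itself is ultimately learned by the ANN described later. As a much-simplified stand-in that only illustrates the idea of fitting processed sEMG channels to the six F/T components, a linear least-squares fit on synthetic data can be sketched (Python/NumPy, illustrative only; the real project uses Matlab's neural network tools on measured data, and all values below are fabricated for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical processed data: 100 samples of 8-channel sEMG features
# and the corresponding 6-component F/T readings (synthetic here).
emg = rng.standard_normal((100, 8))
true_map = rng.standard_normal((8, 6))
ft = emg @ true_map + 0.01 * rng.standard_normal((100, 6))

# Fit a linear predictor F/T ≈ emg @ W by least squares.
W, *_ = np.linalg.lstsq(emg, ft, rcond=None)

pred = emg @ W
rmse = np.sqrt(np.mean((pred - ft) ** 2))  # small residual on this synthetic data
```

A real sEMG-to-force relationship is nonlinear, which is precisely why the report maps it with an ANN rather than a linear model; this sketch only shows the shape of the regression problem.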
4.2 Software Requirements:
“A user interface is well designed when the program behaves exactly how the user
thought it would” (Spolsky, 2001). This means that the software carries out all the
actions that it has been designed to do, as well as any actions that the user would
expect it to do. The software created in this project must be able to fulfil the
following requirements:
Stage 1-Signal Processing
Input 1: Surface Electromyography (sEMG) Signal
1) Input the raw data from the Surface Electromyography sensors
2) Rectify the sEMG signal
3) De-noise the signal using a suitable filtration method
4) Carry out the normalisation of the signal
5) Carry out the Principal Component Analysis
Input 2: Force/Torque Signal
1) Input the raw data from the Force/Torque sensor
2) De-noise the signal using a suitable filtration method
3) Carry out the normalisation of the signal
4) Carry out the Principal Component Analysis
Stage 2- Final Model Production and Artificial Neural Network
This stage is where the user will compile, view and save the resulting signals after
processing.
Input 1: Surface Electromyography (sEMG) Signal
1) Load and compile processed data onto same plot
2) Save the data from plot
Input 2: Force/Torque Signal
1) Load and compile processed data onto same plot
2) Save the data from plot
Stage 3: Launch Artificial Neural Network program
1. Launch the Neural Network Fitting application available in Matlab and close the
current signal processing software.
A comprehensive flowchart that depicts the flow of the software can be seen in
Appendix II 13.1.
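The Stage 1 sEMG pipeline listed above (rectification, de-noising, normalisation and PCA) can be sketched as follows. The report implements these stages with Matlab's built-in functions; this Python/NumPy version is only an illustrative outline, and the moving-average filter is a simple stand-in for whichever filtration method is finally chosen:

```python
import numpy as np

def rectify(emg):
    """Full-wave rectification: make all signal components positive."""
    return np.abs(emg)

def moving_average(signal, window=5):
    """Simple smoothing filter, standing in for the de-noising stage."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def normalise(signal):
    """Scale the signal into the range [0, 1]."""
    lo, hi = signal.min(), signal.max()
    return (signal - lo) / (hi - lo)

def pca(data, n_components=2):
    """Principal Component Analysis via the covariance eigendecomposition.
    `data` is (samples, channels); returns the projected data."""
    centred = data - data.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]  # largest variance first
    return centred @ eigvecs[:, order[:n_components]]
```

The F/T pipeline follows the same outline, minus the rectification step, which matches the two requirement lists above.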
Method Investigation
This section will describe and explain four different options available for the creation
of the signal processing software. Each method has been trialled by attempting to
process signals and create a basic Graphical User Interface(GUI). Finally, a matrix
has been created comparing each method against specific criteria, giving each
method a score which helped to make the final decision.
5.1 Matlab:
Matlab is a procedural programming language developed by MathWorks, a
company who specialise in mathematical computing software. Matlab can be used
for mathematical functions (such as an advanced calculator), the plotting of
functions, the implementation of algorithms and the creation of user interfaces.
Matlab can also interface with many other languages, including C, C++ and Java. It
is estimated that there are roughly 1 million Matlab users across the globe (EE
Times, 2004), and these users come from a variety of backgrounds in science,
engineering and economics.
The functions to carry out the signal processing are readily available in Matlab.
These can be used along with the graphical user interface tool called ‘GUIDE’ to
create the user interface.
Figure 5.1.1 shows some of the controls available to build the GUI, such as buttons
and text boxes. Figure 5.1.2 shows the ‘Property Inspector’, which contains the
editing tools available to edit the controls.
Figure 5.1.1 UI building toolbox Figure 5.1.2: Property inspector
Trial of method
The user interface capabilities of Matlab had not previously been used; therefore, it
was decided that the trial would first check how well this aspect worked using a
simple function, rather than trying to combine the signal processing ability too, since
that would have been too complex. For this example, the user is required to input a
value in the blue text box and then press the “compute” button to create the 3D
shaded surface plot (pictured). The plotting code is written in the Matlab editor, and
this is how all controls and functions on the user interface are programmed as well.
After some basic coding, the following user interface was produced:
The signal-processing capabilities were checked by attempting to rectify raw sEMG
data from the experiment. The results are displayed as follows:
Figure 5.1.5: Raw data
Figure 5.1.3: Trial of GUI capability (the user inputs a value into the text box, then
presses the “compute” button to produce the 3D plot)
Figure 5.1.4: Results of sEMG data rectification (all the values are now positive,
because all the negative components from the raw data became positive)
There are plenty of tools available to design and create the user interface, and the
GUIDE tool works quite well. Furthermore, Matlab already has in-built functions that
allow the processing of the signals.
However, the Matlab software automatically inserts code known as “comments”,
which does not actually give instructions to the GUI. In this example, the majority of
the code was produced by Matlab automatically after inserting each control; the
actual function code was only three lines. The actual project will require code that is
very long, so it could be too tedious to produce this when Matlab contributes so
much already. However, it is possible to disable this functionality so that the only
code present is what actually gives instructions to the GUI.
5.2 Visual Basic with C# Code
Visual Basic (VB) is a programming language and environment
developed by Microsoft. VB was one of the first products to
provide a graphical programming environment for developing
user interfaces. Users of VB can add controls such as buttons
and dialogs by dragging and dropping them and then defining
their properties, rather than programmatically altering the user
interface. VB is an object-oriented programming language,
which means that it is event-driven and therefore reacts to
events such as a button-click.
The user interfaces within VB can be created and modified
programmatically for various languages such as C, C++, Pascal
and Java. It is sometimes called a Rapid Application
Development (RAD) system because it enables users to quickly
build prototype applications (Beal, 2015).
C# is a simple, modern programming language that works in an
object-oriented format. It is very similar to other languages
such as C and C++ (Ecma International, 2006). Object-oriented
programs are made up of two components: the first is objects,
which are data structures that contain data in the form of fields,
and the second is the code which gives the program methods to carry out.
Figure 5.2.1: Events
Visual Basic has an extensive list of “events” which can be executed in the
software. These range from simple ones, such as edit-text, to those which are also
simple to implement yet make the software look very advanced, such as
mouse-hover. A list of these events can be seen in Figure 5.2.1.
There aren’t any functions readily available in VB that can carry out the signal
processing. However, there are online repositories such as GitHub where it might be
possible to find code written in C# and store it. The code would then call these
stored commands to process the signals.
Trial of method
Designing and building a basic user interface was very easy using the software.
Visual Basic has the capability to build a Windows Form with various tabs;
Figures 5.2.3 and 5.2.2 show the attempt that was made. VB allows the creation of
multiple tabs, which helps to keep the software concise. This feature could be used
to provide several tabs where the processing of the two different types of signals
would take place. The user would be able to click the “Load File” button, which
would allow them to locate the text file containing the raw signal data. This would
then be plotted and displayed in the window seen in Figure 5.2.2.
Figure 5.2.3: GUI trial
Figure 5.2.2: GUI trial (this is where the signal would be displayed; after each stage
of processing is selected, the plot will automatically update itself)
For the purpose of demonstrating the potential design and layout of the software, the
signal displayed in Figure 5.2.2 was actually plotted using Matlab and loaded into the
UI as a picture. After carrying out extensive research into ways in which data could
be plotted and processed within a Visual Basic Windows Form, it was deemed too
complicated and nearly impossible to plot data, let alone carry out the signal processing.
5.3 Visual Basic with Matlab:
The following methods combine the signal processing functions in Matlab with the
GUI tools available in Visual Basic.
Matlab Coder:
There is an application available in Matlab that can convert code written there into
C++ code. It generates readable and portable C and C++ code from Matlab code,
including the vast number of existing mathematical and graphical functions available
(MathWorks, 2016). This method could be used to write the signal processing
functions within Matlab and then use the Coder application to convert them to C++,
which can then be used in Visual Basic, where the UI is being created.
Trial of Matlab Coder
This method was trialled by writing a simple piece of code to load a matrix from a
text file whilst ignoring the first column and row; this is the function to be converted
into C++ for use in Visual Basic. When attempting to trial the method, Matlab itself
kept crashing, see Figure 5.3.1 for a screenshot of this error. After several attempts,
it was decided that if the method was going to cause so many issues even when
trying to convert the most basic function, it was best to stop the trials and look at
other methods.
Figure 5.3.1 C coder crashing Matlab
Dynamic Data Exchange (DDE)
Dynamic Data Exchange is a method of transferring data between programs that
was originally adopted by Microsoft Windows. “It sends messages between
applications that share data and uses shared memory to exchange data between
applications” (Microsoft, 2016). DDE facilitates the “client and server” method.
Trial of DDE
Both Matlab and Visual Basic support this method; therefore, it is possible to
implement many functions from Matlab in VB (Cerqueira & Poppi, 1996). In this
case, Matlab would act as the server and VB as the client. The method would work
by creating the user interface in Visual Basic and sending the signal processing
commands to Matlab. The resulting plots would then be sent back to Visual Basic,
where they would be displayed to the user.
This method was researched extensively using various online libraries and forums
offering expert advice. However, all the suggestions showed that the method would
be far too complex to carry out and is beyond the scope of this project.
5.4 Methodology Comparison
This section reviews each of the four options that were trialled, discussing their
advantages and disadvantages, and finally presents the chosen method based on
an evaluation of the trials.
Matlab Only
Although Matlab is not built for designing user interfaces, its capabilities are excellent.
One of its key advantages is its excellent built-in signal processing capabilities and
the fact that it already has all the functions required for each stage. This means that
additional APIs will not have to be sourced elsewhere.
Furthermore, Matlab has advanced mathematical and graphical capabilities. This
means that it will be possible to plot and display the results of the processing, giving
the user a visual image of what is happening to their data.
However, there are several downsides to using this method. An extremely
significant one is that this method would have to be self-taught, since no previous
experience or formal training in using it to create software existed prior to this
project. This will significantly increase the time spent building the software.
The code required to create even the most basic GUI is complex and extremely
lengthy, as Matlab will generate quite a significant amount of code every time a new
control is added. The software also has a very limited range of events which can be
executed, which will limit how advanced and professional the final software can be.
Also, in terms of aesthetics, Matlab has very limited tools dedicated to creating
graphics. For example, it is only possible to use block colours; they cannot be made
into gradients or patterns to make them more visually appealing. Finally, the only
way to check the progress of the program is to run the commands and see the
results, which slows down the development process.
Visual Basic Only
Visual Basic’s main function is to build GUIs; therefore it has excellent user
interface building capabilities, with a wide range of tools available, ranging from
simple buttons and message boxes to tools that are more complex to program,
such as timers and performance counters. Many of these tools can be physically
placed on the Form with a limited amount of additional programming required to
control them, whereas in Matlab many such features, for example help and
message prompts to the user, have to be created programmatically.
Furthermore, it is extremely easy to change the appearance of the form to make
features bolder and give the UI a professional look, without having to rely simply on
block colours. For example, VB allows pictures to be easily imported into “picture
boxes” on the form, which would allow the background to have a gradient and look
sleeker. Finally, this method has been previously learned and can therefore be used
with confidence, as only a limited amount of additional learning would be required to
achieve the desired software.
The obvious downside is the lack of readily available mathematical and graphical
functions, which means the signal processing cannot take place using VB alone; it
could create a really attractive GUI that does not compute the necessary functions
to meet the basic requirements of the software.
Matlab Coder-C++ Generator
The principle behind this idea is exactly what is needed to easily implement the
signal processing commands from Matlab in Visual Basic. The benefit of this method
would be that it allows the software to be designed using VB’s excellent tools and
graphics-making capabilities, whilst taking advantage of Matlab’s powerful
mathematical and graphical plotting capabilities. It would allow the software to
achieve both the desired functional capabilities and the professional aesthetics and
features.
The disadvantage of using this method is the most significant of any of the four
trialled methods: the Coder application kept crashing when attempting to convert
even the most basic Matlab functions into C code. The conversion process is
evidently so demanding that it would take an extremely long time to convert the
potentially complex signal processing functions.
Dynamic Data Exchange
Like the Matlab Coder, this method has the potential to combine the excellent
attributes of both Matlab and Visual Basic. However, after carrying out rigorous
research into this option and the methods of using it, it was decided that it required
advanced programming skills and knowledge beyond the scope of this project.
5.5 Conclusion of Methodology:
In order to reach a decision on which method to use to create the software, a
comparison matrix was created, which can be seen in Appendix II 13.5. Matlab
scored the highest and was therefore selected. To summarise, it was selected for
the following reasons:
✓ It has excellent mathematical and graphical computational capabilities
✓ It has graphical user interface generation capabilities that meet the
requirements of the software that will be designed
User Interface Design
The main aim of this project is to design software that will allow users to easily
process their raw data signals; the keyword is “easy”. The software must be easy to
understand, navigate and control by a user who is not necessarily an expert in signal
processing. The user must be able to achieve a high level of signal processing whilst
having minimal input, meaning that the software does the majority of the work for
them. Even if the software is able to provide all the functions to carry out all the
stages, it would be rendered useless if the user does not know how to navigate the
software in order to use them. This section explains the research and process that
were carried out to reach the final design.
The design of the user interface must achieve two things:
1. Easy to use and navigate (Usability)
2. Aesthetically pleasing
6.1 Usability
Usability has many definitions; a more specific one is “the extent to which a product
can be used by specified users to achieve specified goals with effectiveness,
efficiency and satisfaction in a specified context of use” (Peuple & Scane, 2003).
The term usability is considered by many to be far more quantifiable than the term
“user-friendly”. In his book Usability Engineering, Jakob Nielsen stated that usability
can be broken down into five key components:
I. Learnability: The software should be easy enough that users can quickly start
to use it;
II. Efficiency: The software should be quick to use, for example requiring fewer
keystrokes, thus enabling a high level of productivity;
III. Memorability: If the user returns to the software after a long period of not
using it, it should not be necessary for them to re-learn how to use it;
IV. Errors: The system should have as few errors as possible; it should be
possible to recover from errors, and catastrophic errors should be prevented
from occurring;
V. Satisfaction: Users should find the software subjectively easy to use (Peuple &
Scane, 2003)
Several methods that can increase the usability of the software will be explored,
these are:
i. Providing instructions on how to use the software before moving on to the
processing stages. This could be very brief or in-depth, describing each stage.
ii. Help notes that appear when the user hovers over the button belonging to the
functions they want to use.
iii. Limit the amount of input the user has, to avoid leaving room for errors. For
example, only allowing the user to carry out the stages in one order.
iv. Where the user is required to make more significant inputs, they must be told the
expected type and range; for example, the cut-off frequency entered during the
filtration stage must be an integer between 1 and 20 Hz. If they do not enter a
value that meets these criteria, they must be alerted to the error and given the
chance to change it.
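Point iv can be illustrated with a small validation routine. This is a generic sketch of the idea rather than the project's actual Matlab code; the function name and messages are invented for the example, with the 1-20 Hz cut-off range taken from the text above:

```python
def validate_cutoff(text):
    """Check a user-entered cut-off frequency: must be an integer in 1-20 Hz.
    Returns (value, error_message); error_message is None when the input is valid,
    so the caller can prompt the user to correct it and try again."""
    try:
        value = int(text)
    except ValueError:
        return None, "Please enter a whole number."
    if not 1 <= value <= 20:
        return None, "Cut-off frequency must be between 1 and 20 Hz."
    return value, None
```

Returning an error message instead of raising keeps the user in control: the UI can display the message and leave the text box editable, which matches the "prompted of their error and given the chance to change it" behaviour described above.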
6.2 Aesthetics
Much like any other type of product, the aesthetics of the software are extremely
important for many reasons. Firstly, if the product is to be sold in industry, it needs to
be marketable. This means that it must have its own brand consisting of a distinctive
name, logo and tag line. Secondly, the aesthetics will influence how likely people are
to use it. Even though the look of the software does not affect its ability to carry out
the desired processes, it does affect how easily the user can operate it.
Layout
Several layouts for the software were designed; most of them focus on how to
display the resulting signal from the processes. These layouts were selected based
upon research done into the current scientific software user interfaces. The
compilation of this research can be seen in Appendix II 13.2 and 13.3.
1. Layout One
This layout has the plotted results displayed next to the buttons that the user presses
to carry out each process. It is a very simple display, see Figure 6.2.1, that will allow
the user to avoid working through multiple windows, which can be tedious. However,
having every process on one window makes it possible for the user to go through
the processes in the wrong order, thus producing the wrong results. Also, there are
various inputs that the user must give to the software, and leaving room for these on
a single window will make it extremely cluttered.
2. Layout Two:
This layout has the different stages carried out on multiple tabs that the user can
move through, see Figure 6.2.2. However, for this to work, the software must only
allow the user to go through the processes in one order. This can be done by
disabling the previous tab when the user starts a new process. This layout was not
used because Matlab’s GUI tool does not allow for the creation of tabs.
3. Layout Three
This layout has each process carried out in a different window, see Figure 6.2.3.
Although this means that the user will have to go through multiple windows, it is
easier to organise all the controls, such as buttons and user-input text boxes.
Furthermore, this layout forces the user to carry out the processes in a single order,
unless the capability to go back to a previous stage is provided through a ‘Back’
button.
Figure 6.2.1: Layout one
Figure 6.2.2: Layout two
Colour scheme
Colour is an extremely important part of user interface design, as it plays a vital role
in how the user interacts with the software. If a colour scheme is chosen that
consists of various bright colours or only dark colours, it can make the user interface
very difficult to view and lettering difficult to read.
To select a colour scheme, Adobe Kuler was used. This is a website that generates
multiple colour schemes based upon how well suited the colours are to one another.
Although many different schemes were generated using this method, for the
purpose of this report only three will be discussed. Each was trialled on the welcome
and instruction page to show how well it worked, and one was selected.
Colour Scheme One:
This scheme (Figure 6.2.4) has a good range of colours, from bold (orange) to the
more typical colours used in scientific software (blues). The contrasting colours will
help to make features such as buttons and text boxes stand out.
Figure 6.2.3: Layout three
Figure 6.2.4: Scheme one
Figure 6.2.5: Scheme two
Figure 6.2.6: Scheme three
Critical Review: The colours used in this scheme are too bright, which makes the
text on the screen uncomfortable to read. Furthermore, the use of these bright
colours makes the software look less professional.
Colour Scheme Two:
This scheme uses professional, corporate-looking colours in very similar shades.
None of the colours are too bright or overwhelming, and they all complement each
other well.
Critical Review: The colours do not contrast enough; therefore, features such as
buttons will not stand out. Furthermore, the use of similar colours creates a very
monotonous display that lacks interest.
Colour Scheme Three:
This colour scheme consists of colours that contrast each other and others that
complement each other. The use of bolder colours will help to highlight features and
communicate the different roles of these features. Furthermore, the use of corporate
blues will give the software a professional look.
Critical Review: Certain colours, namely the bright pink, make the software look less
professional and may need to be toned down if this scheme is to be used.
Colour scheme number three was selected because of the professional look the
colours give the interface, as well as the brighter colours, which help to highlight
features such as buttons and user input text boxes. However, some of the colours
will need to be toned down to make the text more readable.
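The readability concern raised in these reviews can also be checked numerically. As an illustration outside the scope of the original project, the WCAG 2.x contrast-ratio formula gives a quick test of whether text will remain legible against a given background:

```python
def _channel(c):
    """Linearise one sRGB channel (0-255) per the WCAG relative-luminance formula."""
    c /= 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4


def luminance(rgb):
    """Relative luminance of an (r, g, b) colour, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)


# Black text on white is the maximum possible contrast.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG recommends a ratio of at least 4.5:1 for body text, so a check like this would flag text placed over the scheme's brightest colours before any toning down is done by eye.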
Description of Software
This section gives a brief overview of the final software; a page-by-page guide can
be found in the Veribus User Manual, which is a separate document.
Name of Software: Veribus (meaning Human Force in Latin)
Platform: Matlab
The completed software comprises seven windows, each connected to the others.
The software first opens to a welcome screen, see Figure 6.2.1.
7.1 Introduction to Software
The purpose of these pages, see Figure 7.1.1, is to describe to the user the stages
of signal processing that this software will allow them to carry out. It gives a short
description of each stage and what it will do to their data signal.
There is some essential information that the user will need to know before they can
begin to use the software; these pages also give the user this information. For
Figure 6.2.1: Welcome page
Figure 7.1.1: information pages

More Related Content

What's hot

Hand gesture recognition system(FYP REPORT)
Hand gesture recognition system(FYP REPORT)Hand gesture recognition system(FYP REPORT)
Hand gesture recognition system(FYP REPORT)
Afnan Rehman
 
Silent sound interface
Silent sound interfaceSilent sound interface
Silent sound interface
Jeevitha Reddy
 

What's hot (20)

Silent sound-technology ppt final
Silent sound-technology ppt finalSilent sound-technology ppt final
Silent sound-technology ppt final
 
Emotiv epoc introduction
Emotiv epoc introductionEmotiv epoc introduction
Emotiv epoc introduction
 
Silent Sound Technology
Silent Sound TechnologySilent Sound Technology
Silent Sound Technology
 
Silent Sound Technology
Silent Sound TechnologySilent Sound Technology
Silent Sound Technology
 
Hand gesture recognition system(FYP REPORT)
Hand gesture recognition system(FYP REPORT)Hand gesture recognition system(FYP REPORT)
Hand gesture recognition system(FYP REPORT)
 
silent sound new by RAJ NIRANJAN
silent sound new by RAJ NIRANJANsilent sound new by RAJ NIRANJAN
silent sound new by RAJ NIRANJAN
 
Silent Sound Technology
Silent Sound TechnologySilent Sound Technology
Silent Sound Technology
 
Silent sound technology final report
Silent sound technology final reportSilent sound technology final report
Silent sound technology final report
 
Brain Computer Interface & It's Applications | NeuroSky Minwave | Raspberry Pi
Brain Computer Interface & It's Applications | NeuroSky Minwave | Raspberry PiBrain Computer Interface & It's Applications | NeuroSky Minwave | Raspberry Pi
Brain Computer Interface & It's Applications | NeuroSky Minwave | Raspberry Pi
 
AlterEgo Device PPT
AlterEgo Device PPTAlterEgo Device PPT
AlterEgo Device PPT
 
Silent sound technology
Silent sound technologySilent sound technology
Silent sound technology
 
Silent sound technology
Silent sound technologySilent sound technology
Silent sound technology
 
Silent sound technology_powerpoint
Silent sound technology_powerpointSilent sound technology_powerpoint
Silent sound technology_powerpoint
 
EEG Game Simulator Using BCI & RaspberrPi
EEG Game Simulator Using BCI & RaspberrPi EEG Game Simulator Using BCI & RaspberrPi
EEG Game Simulator Using BCI & RaspberrPi
 
Alter ego - PPT
Alter ego - PPT Alter ego - PPT
Alter ego - PPT
 
Emotiv Analysis
Emotiv AnalysisEmotiv Analysis
Emotiv Analysis
 
Silent sound technologyrevathippt
Silent sound technologyrevathipptSilent sound technologyrevathippt
Silent sound technologyrevathippt
 
Silent sound technology
Silent sound technologySilent sound technology
Silent sound technology
 
Brain access
Brain accessBrain access
Brain access
 
Silent sound interface
Silent sound interfaceSilent sound interface
Silent sound interface
 

Viewers also liked

Dalhousie Parking Project Final Report
Dalhousie Parking Project Final ReportDalhousie Parking Project Final Report
Dalhousie Parking Project Final Report
Ian Milne
 
automatic car parking system
automatic car parking systemautomatic car parking system
automatic car parking system
sowmya Sowmya
 

Viewers also liked (6)

Dalhousie Parking Project Final Report
Dalhousie Parking Project Final ReportDalhousie Parking Project Final Report
Dalhousie Parking Project Final Report
 
Pinterest (MyTacks) - Software Engineering Management
Pinterest (MyTacks) - Software Engineering ManagementPinterest (MyTacks) - Software Engineering Management
Pinterest (MyTacks) - Software Engineering Management
 
investment options for retail investor when inflation s expected to ise
investment options for retail investor when inflation s expected to iseinvestment options for retail investor when inflation s expected to ise
investment options for retail investor when inflation s expected to ise
 
Final project report format
Final project report formatFinal project report format
Final project report format
 
automatic car parking system
automatic car parking systemautomatic car parking system
automatic car parking system
 
The software Implementation Process
The software Implementation ProcessThe software Implementation Process
The software Implementation Process
 

Similar to thedevelopmentofsoftwareinterfaceforautomationapplicationmay4final

Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.pptProto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
AnirbanBhar3
 
Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
Performance Evaluation of a Network Using Simulation Tools or Packet TracerPerformance Evaluation of a Network Using Simulation Tools or Packet Tracer
Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
IOSRjournaljce
 
Describe The Main Functions Of Each Layer In The Osi Model...
Describe The Main Functions Of Each Layer In The Osi Model...Describe The Main Functions Of Each Layer In The Osi Model...
Describe The Main Functions Of Each Layer In The Osi Model...
Amanda Brady
 
Sean Barowsky - Electronic Normalizer
Sean Barowsky - Electronic NormalizerSean Barowsky - Electronic Normalizer
Sean Barowsky - Electronic Normalizer
Sean Barowsky
 

Similar to thedevelopmentofsoftwareinterfaceforautomationapplicationmay4final (20)

Robotics and expert systems
Robotics and expert systemsRobotics and expert systems
Robotics and expert systems
 
Multifunctional Relay Based On Microcontroller
Multifunctional Relay Based On MicrocontrollerMultifunctional Relay Based On Microcontroller
Multifunctional Relay Based On Microcontroller
 
Av4103298302
Av4103298302Av4103298302
Av4103298302
 
Wireless Ad Hoc Networks
Wireless Ad Hoc NetworksWireless Ad Hoc Networks
Wireless Ad Hoc Networks
 
Seminar report on national instruments electronics workbench
Seminar report on national instruments electronics workbenchSeminar report on national instruments electronics workbench
Seminar report on national instruments electronics workbench
 
NUMERICAL STUDIES OF TRAPEZOIDAL PROTOTYPE AUDITORY MEMBRANE (PAM)
NUMERICAL STUDIES OF TRAPEZOIDAL PROTOTYPE AUDITORY MEMBRANE (PAM)NUMERICAL STUDIES OF TRAPEZOIDAL PROTOTYPE AUDITORY MEMBRANE (PAM)
NUMERICAL STUDIES OF TRAPEZOIDAL PROTOTYPE AUDITORY MEMBRANE (PAM)
 
Embedded system projects for final year Bangalore
Embedded system projects for final year BangaloreEmbedded system projects for final year Bangalore
Embedded system projects for final year Bangalore
 
Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.pptProto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt Proto Spiral.ppt
 
Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
Performance Evaluation of a Network Using Simulation Tools or Packet TracerPerformance Evaluation of a Network Using Simulation Tools or Packet Tracer
Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
 
Aplications for machine learning in IoT
Aplications for machine learning in IoTAplications for machine learning in IoT
Aplications for machine learning in IoT
 
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDSFACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
 
Research Presentation
Research PresentationResearch Presentation
Research Presentation
 
Describe The Main Functions Of Each Layer In The Osi Model...
Describe The Main Functions Of Each Layer In The Osi Model...Describe The Main Functions Of Each Layer In The Osi Model...
Describe The Main Functions Of Each Layer In The Osi Model...
 
Hardware-Software Codesign
Hardware-Software CodesignHardware-Software Codesign
Hardware-Software Codesign
 
MONITORING FIXTURES OF CNC MACHINE
MONITORING FIXTURES OF CNC MACHINEMONITORING FIXTURES OF CNC MACHINE
MONITORING FIXTURES OF CNC MACHINE
 
Sean Barowsky - Electronic Normalizer
Sean Barowsky - Electronic NormalizerSean Barowsky - Electronic Normalizer
Sean Barowsky - Electronic Normalizer
 
CA UNIT I PPT.ppt
CA UNIT I PPT.pptCA UNIT I PPT.ppt
CA UNIT I PPT.ppt
 
Scalable constrained spectral clustering
Scalable constrained spectral clusteringScalable constrained spectral clustering
Scalable constrained spectral clustering
 
Actor model in F# and Akka.NET
Actor model in F# and Akka.NETActor model in F# and Akka.NET
Actor model in F# and Akka.NET
 
PROJECTS FROM SHPINE TECHNOLOGIES
PROJECTS FROM SHPINE TECHNOLOGIESPROJECTS FROM SHPINE TECHNOLOGIES
PROJECTS FROM SHPINE TECHNOLOGIES
 

thedevelopmentofsoftwareinterfaceforautomationapplicationmay4final

  • 1. 1 MMC501 INDIVIDUAL PROJECT MEng Product Design Engineering DEVELOPMENT OF A SOFTWARE INTERFACE FOR AUTOMATION APPLICATION FINAL REPORT 2015/16 Michelle Okyere B126700 Supervisor: YM. Goh 2nd Reader: R S. Bhamra
  • 2. I STATEMENT OF ORIGINALITY “This is to certify that I am responsible for the work submitted in this report, that the original work is my own except as specified in references, acknowledgements or in footnotes”. PRINT NAME: MICHELLE OKYERE SIGNATURE: DATE: 03/05/2016
  • 3. II Acknowledgements I would like to say thank you to Dr Yee Mey Goh who first of all created this project for me so that I would have the opportunity to learn new software skills. She has supported me throughout the entirety of it allowing me to meet my aims. I would also like to say thank you to Yuchen Zhao, who has offered support and guidance at every hurdle that I faced throughout the duration of the project and has helped to deepen my knowledge in the field of Intelligent Automation. I would also like to thank Ran Bhamra for his valuable recommendations on improvements to the project that lead to it being a success. Finally, I would like to thank my parents, without their love and continued support; I could not possibly be successful.
  • 4. III Abstract The field of Industrial Robots has advanced significantly in recent years. Despite this, a robots ability to adapt to their environment in the way that a human does is very limited. Robot learning allows them to acquire skills or adapt to their environment. Ideally, it should be possible to extract the skill-based performance from human demonstrations to transfer those skills to robots. The lack of convenient and reliable force related measurements is slowing down the advancement of robot learning. A potential solution to this is Surface Electromyography, a non-invasive and inexpensive method of measuring surface muscular activities. The output signal has a strong and stable relationship to the force exerted by muscles. A model can be produced which allows the Force/Torque from each muscle movement to be predicted by the muscular activation signals. This information can be transferred to a robot, so that it can better imitate the motions produced by the human arm. However, due to the noisiness of the raw signals, they must be processed in four different stages. The purpose of this project was to design and build software that carries out all the stages of the signal processing in one programme. The result of the project is Veribus, a software application that allows the user to successfully carry out all the stages of said signal processing and save their results.
  • 5. IV Glossary Term Definition Action Potential The signal transmitted by a neuron when it goes from a resting state to stimulated. Algorithm The procedure or formula required to solve a problem. Aliasing A phenomenon in digital sound, in which static distortion occurs, resulting from a low sampling rate that is less than twice the highest frequency present in the signal. Alpha-Motor Neuron A motor neuron that sends messages from central nervous system to initiate and sustain voluntary, conscious movement. Attenuate (decibels, dB) The diminishing of signal strength during transmission. Analogue Signal A type if signal which varies continuously in frequency and amplitude. Application (App) A type of software that allows a specific task to be performed. Often referred to as an “App”. Application Program Interface(API) A set of routines, protocol and rules required for the building of a software application Closed form A mathematic expression that only contains a finite amount of symbols and only include commonly used operations Covariance Provides a measure of the strength of the correlation between tow of more sets of random variates.
  • 6. V DC offset The term used to described when a waveform has an uneven amount of signal in the positive and negative domains Decibels(dB) The measure of a sound level. Depolarisation A cell that is more positively charged on the inside than the outside Digital Signal A series of pulses confined to two states, ON or OFF. Represented in binary code as 0 or 1. Eigenvalue The scalar value associated with a given linear transformation of a vector space Eigenvector A special set of non-zero vectors associated with a linear system of equations. Event (in a software engineering context) An identifiable action that is carried out by the user of software such as clicking or by the system, such as an error message. Form (in a software engineering context) A platform used to build a user interface which proves a variety of controls such as text boxes and buttons. Fourier Series Used in Fourier analysis to represent an expansion of a periodic function. Impulse Response The output signal that results when an impulse is applied to the system input. Motor-Neuron A nerve cell which carries electrical signal to a muscle, triggering it to relax or contract. Network topology A schematic description of the arrangement nodes and
  • 7. VI connecting lines of a network Paradigms A framework containing basic assumptions, rules and methodologies generally accepted by a scientific community. Piezoelectric The appearance of an electrical potential across the sides of a crystal when it is subjected to mechanical stress. Transfer Function The relationship between the input and output signal. Truncated To cut data off abruptly, beyond a certain value.
  • 8. VII Contents Introduction ..........................................................................................................1 1.1 Aim................................................................................................................2 1.2 Objectives .....................................................................................................2 This aim can be achieved by meeting the following objectives: ..............................2 Literature Review .................................................................................................4 2.1 Robot Learning by Demonstration ................................................................4 Need for Programming by Demonstration .............................................5 Current State PbD .................................................................................6 2.2 Capturing Human Skills ................................................................................7 Methods of Capturing Human Skill ........................................................7 2.3 Signal Processing .......................................................................................14 Data Rectification.................................................................................14 Filtration ...............................................................................................15 Normalisation.......................................................................................23 Principle Component Analysis .............................................................25 Artificial Neural Network (ANN)............................................................27 Case study application: ......................................................................................30 3.1 The Equipment............................................................................................30 The Thalmic Labs Myo Armband Sensor ............................................30 
The xsens MTw Wireless motion tracker.............................................30 The 6-axis ATI Force/Torque Sensor ..................................................31 3.2 Experimental Method ..................................................................................31 Primitive Calibration Stage: .................................................................31 Joint state data collection ....................................................................32 Getting reference data for sEMG signal ..............................................33 Getting reference for F/T signal ...........................................................34 Peg-In-Hole Experiment ......................................................................34 Section 3: Software Design................................................................................36 4.1 Purpose of the software ..............................................................................36 4.2 Software Requirements: .............................................................................36 Stage 1-Signal Processing ..................................................................36 Stage 2- Final Model Production and Artificial Neural Network ...........36
  • 9. VIII Stage 3: Launch Artificial Neural Network program.............................37 Method Investigation ..........................................................................................38 5.1 Matlab: ........................................................................................................38 Trial of method.....................................................................................39 5.2 Visual Basic with C# Code..........................................................................40 Trial of method.....................................................................................41 5.3 Visual Basic with Matlab: ............................................................................42 Matlab Coder: ......................................................................................42 Dynamic Data Exchange (DDE) ..........................................................43 5.4 Methodology Comparison ...........................................................................43 Matlab Only..........................................................................................43 Visual Basic Only.................................................................................44 Matlab Coder-C++ Generator ..............................................................45 Dynamic Data Exchange .....................................................................45 5.5 Conclusion of Methodology:........................................................................45 User Interface Design ........................................................................................46 6.1 Usability.......................................................................................................46 6.2 Aesthetics ...................................................................................................47 Layout ..................................................................................................47 
Colour scheme.....................................................................................49 Description of Software ......................................................................................52 7.1 Introduction to Software ..............................................................................52 7.2 Signal Processing .......................................................................................53 Discussion-Evaluation of Software.....................................................................64 8.1 Improvements to Product............................................................................65 Project Management ..........................................................................................67 Conclusion......................................................................................................68 10.1 Meeting of Objectives .................................................................................68 10.2 Concluding remarks ....................................................................................70 Works Cited .......................................................................................................I Appendix I..........................................................................................................I 12.1 Objectives Form.............................................................................................I 12.2 Initial Gantt Chart ..........................................................................................II
  • 10. IX 12.3 Actual Gantt Chart ......................................................................................III Appendix II.........................................................................................................I 13.1 Software Design Flowcharts ..........................................................................I Overall flow of software ..........................................................................I sEMG Signal Processing ONLY .............................................................I Force/Torque Signal Processing ONLY.................................................II 13.2 User Interface Design Mood Board..............................................................III 13.3 User Interface Design-Review of Existing Software ................................... IV 13.4 Advanced User Interface Designs ............................................................... V 13.5 Method Comparison Matrix......................................................................... VI 13.6 Force/Torque Signal Processing Stages ......................................................7
  • 11. 1 Introduction Industrial Robots are an integral part of automation systems since they are now involved in the manufacturing, assembly, packaging, inspecting and many other aspects of the production and service industries. Technological advances have allowed the development of intelligent automation to replace manual task, thus leading to higher efficiencies, improved and more consistent quality of products and safer working conditions for employees who can now avoid unsafe working environments. An excellent example of this can be seen at Ford Motor Company’s white assembly plant in Changan, China, see Figure 1.1.1. The assembly line here is highly automated and the robot’s carry out multiple processes such as laser welding, spot-welding and palletizing, of which would have been previously carried out by humans. (Chen, 2013). Despite these advancements, a robots ability to adapt to its environment in the way that a human does is very limited, as a result, robots are currently only able carry out repetitive and uncomplicated tasks. However, if this aspect was to be improved, it would open up the application of robotics to carry out more complex tasks usually carried out by humans. Robot learning is a field that is searching for ways in which robots can acquire skills or adapt to their environment through learning algorithms. These skills include grasping and joint manipulation. (Monfared, Automation Processes and Advanced Technologies, 2015). If successful, the manufacturing and assembly industries could stop relying so heavily on manual work processes. Trained humans are intrinsically good at handling complex situations. If the skills from human can be numerically represented and understood by robot, even though if it is not a hundred percent duplicated by the robot, it will become a strong source of prior input knowledge. (Zhao Y. , Al-Yacoub, Goh, Justham, Lohse, & Jackson, 2015). Figure 1.1.11 Ford’s Car Assembly Line in using ABB robots (Chen, 2013)
  • 12. 2 PhD student Yuchen Zhao has been working on achieving this as a part of his work. Surface electromyography (sEMG) is a method of recording the muscle electrical activity performed by muscles. Zhao has been be recording these signals as well as the Force/Torque (F/T) output produced by the muscles, with the purpose of mapping them together. The aim is to produce a model that can predict the Force/Torque output from the muscle activity, and then transferring this data to a robot so that it can mimic the actions of a human more accurately. A key issue of this project is that both of these signals are riddled with noise and unnecessary data when first collected. In order to produce an accurate and reliable predictive model, the signals are required to go through multiple stages of signal processing in order to remove the noise and excessive components. 1.1 Aim The purpose of this project is to design software that carries out all the stages of the signal processing on a single platform. The central mission is to make the process of human skill capture and transfer to robots easier by using F/T trajectories to control robots. 1.2 Objectives This aim can be achieved by meeting the following objectives: Primary: 1. To attain a good understanding of the technical measuring process for capturing human skills; 2. To attain a thorough understanding of the software interface requirements and select appropriate programming languages; 3. To write the code for software which interfaces with the relevant applications and processes the measurement data; 4. To create the user interface which allows the user to control key parameters of the software; 5. To carry out tests on the software and optimise the software.
  • 13. 3 Secondary: 1. To explore options for further development of the software and its potential other uses. In order to fulfil these objectives, it is necessary to outline specific deliverables: Primary: 1. Carry out an in-depth literature review 2. Carry out signal processing using various methods, to get a better understanding 3. Research and find methods to evaluate 4. Trial and review each method 5. Write code to control the user interface and carries out the various stages of signal processing 6. The software must be able to open ANN app within Matlab 7. Design user interface(s) that is linked to the code and allows the user to carry out stages on their own data 8. Allow users to input their own data and review the performance of the program 9. Keep making the necessary changes to ensure that the software has no bugs and performs as desired. Secondary: 1. Explore software’s ability to allow them to change the parameters of the signal processing stages 2. Allow user to repeat experiment multiple times and save their data at the end 3. Allow the user to convert their raw data into the format that can be read by software
Literature Review
This section is an in-depth review of three key topics that need to be understood in order to meet the stated objectives. Relevant information from various sources for each of them has been compiled and studied to develop a comprehensive understanding.
1) Robot Learning by Demonstration: This is the industry that this project aims to help advance, particularly the Programming by Demonstration area.
2) Capturing Human Skills: The purpose of this project is to allow the skills used by humans whilst performing certain tasks to be reliably transferred to a robot; it is therefore crucial that the skills, and the methods of recording them, are understood.
3) Signal Processing: The skills captured from humans are produced in the form of signals with a large amount of noise, which needs to be removed before a reliable model can be created. This can be achieved through various methods of signal processing, which need to be explored before selecting the most appropriate ones.

2.1 Robot Learning by Demonstration
According to Biggs and MacDonald, industrial robot programming methods can be split into two classes: Manual and Automatic programming, see Figure 2.1.1. Manual systems require the programmer to directly enter the desired behaviour of the robot using graphical methods such as ladder logic diagrams, or through the use of text-based programming languages. By contrast, in Automatic programming systems, the program used to control the robot's motions is created automatically; the user therefore controls the robot's behaviour but has no influence on the programming code (Biggs & MacDonald, 2003).
Figure 2.1.1: Manual (left) and Automatic (right) methods (Biggs & MacDonald, 2003).
Programming by Demonstration (PbD) is an automatic programming method and will be the focus of this report. The concept was first conceived in the 1980s and was inspired by the way humans learn new skills by imitation. The idea attracted researchers within the field of manufacturing robotics as a way of automating the tedious manual programming of robots and reducing the costs associated with developing robots within a factory. The aim of PbD is to extract the skill-based performance from human demonstrations and transfer those skills to robots. The demonstration can be performed in two ways. The first is the traditional method, where the robot is manually guided using a remote control such as a teach pendant. This is a relatively common method that is easy to learn; it is often adopted by shop-floor workers, who use it to control robotic processes such as the programming of a robot welder. The second is a more natural method that utilises gestures and voice. It is done by performing the task with no interaction with the robot at all, while recording the movements of the operator using a motion-capture system. The latter method is more advanced and flexible, and is the area that this project aims to support.

Need for Programming by Demonstration
The importance of PbD has grown because conventional methods of programming a robot become unfeasible when situations become too complex, leaving manufacturers reliant on human labour. As industrial robots are equipped with increasingly advanced technologies and capabilities, such as laser welding, as well as more complex hardware such as multiple sensor modalities, programming a robot has become extremely complex (Monfared, Automation Processes and Advanced Technologies, 2015). For example, programming in languages such as C is too complex for everyone to learn, but the alternative, using teach pendants, disrupts production.
In industrial robotics, the goal is not only to reduce costs, but also to create or assemble products far more efficiently than human operators could achieve. PbD is deemed to be particularly useful when used to teach Service Robots in human-robot collaboration scenarios, for example product inspection, where the robot might be used to carry heavy parts. In this case, PbD goes beyond transferring
skills, but moves towards finding ways for the robot to interact safely with humans. The robot needs to be able to recognise human motions and predict their intentions. The need for PbD has become even more crucial with the introduction of humanoid robots. These robots offer many benefits, notably being far more flexible than the typical industrial robot because of the multiple degrees of freedom they have as a result of being designed on the human form. However, their introduction into industry presents even more challenges in terms of learning and communication. In comparison to industrial robots, which are often only required to carry out tasks in static environments, humanoid robots are expected to carry out tasks in dynamic situations. These robots need to adapt to new environments, so the algorithms used to control them need to be flexible and versatile. Due to the continuously changing environments they operate in and the huge number of tasks they are expected to perform, they need to be able to constantly learn new skills and adapt their existing skills to new contexts. As a result, the humanoid is expected to go even further and behave in a human-like manner with regards to social interaction, gestures and learning behaviours (Calinon, 2009). PbD could allow this, as it is a method designed to enable robot adaptability to new environments and situations.

Current State of PbD
Several programming systems and approaches based on human demonstration have been explored since the introduction of PbD. However, a key stage in this area is capturing the skills a human uses to perform a certain task in a way that can be transferred to a robot and allows it to learn. Currently, the skill-based tasks performed by a human operator are difficult to extract and generalise so that a robot can understand and replicate them.
Researchers are currently utilising sensors such as magnetic markers and force plates to record the trajectories of the human operator, encoding these sequences into models which are then transferred to the robot so that it can replicate the movements (Zhao, Al-Yacoub, Goh, Justham, Lohse, & Jackson). Industrial robots today typically have position and force control at the end-effector. However, these tools are bulky and impractical when it comes to performing tasks on-line. There is a need for
more agile methods that allow the process of skill capturing to be more flexible and precise.

2.2 Capturing Human Skills
Throughout the Intelligent Automation industry, there is a growing demand for methods to capture and classify the explicit and tactile skills used by human operators whilst carrying out complex tasks (Everitt, Fletcher, & Caird-Daley, 2015). There are numerous reasons for wanting to capture human skills. A common reason is to understand the processes that intervene between stimuli and responses, which allows one to explain behaviour. Another is to explain a particular class of human behaviour, such as event detection, or to describe the general mechanisms that provide the basis for all behaviours, such as working memory. Describing the relationship between stimuli and responses also makes it possible to design stimuli that produce the desired response. If the input-output relationships for particular types of task can be measured, it becomes possible to adjust the task parameters in a way that will produce a better output. It also provides the basis for training others to exhibit the desired behaviours; in this case, the input-output relationship must be understandable and executable by other humans. The final reason is to capture human skills for the purpose of embedding the information in a machine (Rouse, Hammer, & Lewis, 1989). This makes it possible to build an autonomous machine that can replace human activity in areas that may be dangerous, or when carrying out mundane but complex tasks. For example, if the Force/Torque input from a human's muscles can be measured whilst they carry out a delicate assembly task, it might be possible to programme the robot to apply the same gentle force/torque, thus improving the quality of the product.

Methods of Capturing Human Skill
There are several methods that have been explored for capturing human skill, but before selecting the most appropriate method, it is important to classify the task.
Tasks fall into one of two categories: discrete and continuous. A discrete task is one that has a fixed beginning and end, for example switching a button on or off. A continuous task has no clearly defined beginning or end, and often has an objective to maintain a status in opposition to confounding influences, for example steering a car. Previous research has found it extremely difficult to devise a skill
capturing method that will accommodate both the fluid nature of continuous tasks and the rigid nature of discrete tasks (Everitt, Fletcher, & Caird-Daley, 2015). The options available for capturing these tasks are described as follows:

1. Computer Vision-based methods
In general computer vision (also referred to as Machine Vision), cameras are used to optically sense the presence and shape of an object, and the image is then processed. Image acquisition is the first stage, where cameras, lenses and lighting designed to provide the differentiation required for subsequent processing are used to capture the process being tracked. A microprocessor processes the image, usually in less than one second; the image is then measured and the measurements are digitised. The microprocessor uses various methods for image processing, including edge detection and neural networks (Monfared, Automation Processes and Advanced Technologies, 2015). Based on the results of the image processing, a decision is made, for example whether a part is faulty or acceptable. Although this method is often adopted in assembly lines for quality inspection amongst other applications, it can also be used for human tracking and gesture recognition for the purpose of robot learning. The application of computer vision in human skill capture comprises three stages: detection, tracking and recognition. Detection involves defining and extracting visual features that belong to the body part in question, the hand for example, in the field of view of the camera(s). During the tracking stage, sequential data association is performed between successive image frames; thus, at each moment in time, the system is aware of the presence and location of objects. Tracking also allows the estimation of model parameters, variables and features that were not observable at a certain moment in time.
Finally, the recognition stage is the interpretation of the semantics that the hand location, posture and gesture convey (Zabulis, Baltzakis, & Argyros, 2009). The data provided by this process can be transferred to a robot for it to repeat the motions of the human limb. For example, an anthropomorphic robotic hand can replicate the gestures performed by a human hand whilst picking up an object and moving it to another place.
2. Data Gloves
A data glove is an interactive electromechanical device worn on the hand, which facilitates tactile sensing and fine-motion control in robotics and virtual reality. Tactile sensing involves the continuous sensing of variable contact forces using an array of sensors. These sense the force being applied using strain gauges, piezoelectric devices or magnetic induction (Monfared, Automation Processes and Advanced Technologies, 2015). Fine motor control involves the use of sensors to detect the movements of the wearer's hands and fingers, and the translation of these motions into signals that can be used by a robotic hand (Rouse M., 2005). The gloves typically comprise a cloth material with sensors sewn in at each degree of freedom, see Figure 2.2.1. They can be used to measure the responses from various hand activities such as grasping. A study at the Learning Algorithms and Systems Laboratory (LASA) explored a new setup for a "sensorised" data glove, showing that data gloves make it possible to measure the interaction forces of the hand as well as the wearer's behaviours, such as using their fingers in opposition (R.L, Khoury, J, & A, 2014). This data provides a more complete picture for studying human grasping and manipulation; these skills can then be transferred to an anthropomorphic robotic hand.

3. Biosignals
Biosignals are the signals (electrical and non-electrical) produced in living beings that can be measured and monitored. There are two classes of biosignal: permanent and induced. Permanent biosignals always exist, even without excitation from outside the body. Induced signals, however, have to be triggered
Figure 2.2.1: A Cyberglove data glove with Tekscan tactile sensors (R.L, Khoury, J, & A, 2014)
artificially and only exist at the time of excitation. There are multiple types of biosignal; some of the best known are:

3.1 Electromyography
A motor unit comprises a single alpha motor neuron and the muscle fibres it supplies. The motor neuron supplies the muscle with an action potential, and when this reaches the depolarisation threshold, the muscle contracts. This depolarisation produces an electromagnetic field which is measured as a very small voltage: the EMG signal. Because it arises from the electrical activities of the motor units, the signal has a strong and stable relationship to the force exerted by the muscles. Surface electromyography (sEMG) is a non-invasive and inexpensive method of measuring EMG signals, in which electrodes are placed on the skin over the muscle being measured. Figure 2.2.2 is a schematic diagram of the typical set-up for EMG signal acquisition. The signal itself tends to be quite complex due to its sensitive nature, which makes it easily affected by the anatomical and physiological properties of the muscles and by the instrumentation used for the detection and recording of the signal (Motion Lab Systems, Inc, 2016). There are four types of noise source that influence the raw output signal:
1) Inherent noise from the electronic components of the signal detection and recording instruments used to collect the data;
2) Ambient noise from the electromagnetic radiation in the environment;
Figure 2.2.2: A schematic diagram of the sEMG set-up (Hossain, 2015)
3) Motion artefacts, with electrical signals mainly in the 0-20 Hz frequency range, arising from the electrode-skin interface;
4) The inherent instability of the EMG signal, with unstable components in the 0-20 Hz range that occur due to the quasi-random nature of the firing rate of the muscular motor units (Wang, Tang, & Bronlund, 2013).
As a result, the signal must first be processed to remove the excess noise and render it suitable for analysis and interpretation. Despite its sensitivity, the signal is used in many fields such as assistive technology, rehabilitative technology, armbands for mobile devices and muscle-computer interfaces. A benefit of sEMG is that the sensors can be easily worn by the user and are relatively cheap, making them very useful for capturing human skills.

3.2 Mechanomyography
Mechanomyography (MMG) is a mechanical signal that can be detected when the muscle performs any activity; it has been described as the mechanical counterpart to the EMG signal. The technique uses specific transducers to record the muscle surface oscillations that occur due to the mechanical activity of the motor units. MMG signals can be detected using several types of transducer, including piezoelectric contact sensors, microphones, accelerometers and laser distance sensors. Figure 2.2.3 shows the typical arrangement for MMG measurement using an accelerometer as the transducer.
Figure 2.2.3: The typical set-up for MMG signal acquisition (Islam, Sundaraj, Ahmad, & Ahamed, 2014)
MMG offers some notable advantages over sEMG. First, because the signal propagates through the muscle tissue, the MMG sensors do not need to be placed in a precise or specific location. More significantly, being a mechanical signal, it is not affected by changes in skin impedance due to sweating (Islam, Sundaraj, Ahmad, & Ahamed, 2014). Both of these properties make the acquisition of MMG signals far easier than acquiring sEMG signals. However, the raw signal is still subject to noise and must therefore be processed before it can be analysed and interpreted.

3.3 Electroencephalography
Electroencephalography (EEG) signals provide information about the spontaneous electrical activity of the human brain. EEG signals are very popular in the analysis of brain activity and in determining the state of a human being. During the EEG procedure, multiple small sensors are attached to the scalp, see Figure 2.2.4, to detect the electrical signals produced when brain cells send messages to each other (Trust, 2015). Each signal is amplified, digitised and stored electronically. During the recording of the signal, a series of procedures takes place to induce normal or abnormal EEG activity, such as eye closure, mental activity and sleep. Two EEG signals can then be compared: the top a normal signal, and the bottom the signal produced when the patient closes their eyes.
Figure 2.2.4: Left: An EEG cap with multiple electrodes. Right: EEG readings on a monitor.
The raw EEG signal usually has numerous undesirable characteristics, being complex, noisy and non-stationary. It therefore requires specific signal processing before it can be properly interpreted.

3.4 Electrocardiography
The electrocardiography (ECG) signal corresponds to the electrical activity of the heart. When an electrical potential is generated in a section of the heart, an electrical current is conducted to the body surface in a specific area. The ECG records changes in the magnitude and direction of this activity. The recording takes place by placing electrodes over standard positions, typically the patient's chest and limbs, see Figure 2.2.5. These electrodes detect the changes in the aforementioned current. The voltages are then amplified and recorded on ECG paper as waves and complexes (Stansbury, Brufton, Richardson, & Lyons, 2015). The raw signal is influenced by several factors which result in a very noisy signal; sources of noise include lung and breath sounds as well as electrode contact noise. Therefore the raw ECG signal must be processed before it can be properly interpreted.

Each of the described skill-capturing methods produces output signals with a large amount of noise, which must therefore go through post-processing; if not, the interpretation of the signal analysis may be incorrect. This is why there is a need for software designed to reduce or eliminate noise in a way that is user friendly. When trying to capture human skills for the purpose of transferring them to robots, the types of signals used are typically those which correspond to the electrical activity of muscle movement. Since EEG and ECG methods do not fulfil this requirement, they are not used in human skill capture for intelligent automation purposes.
Figure 2.2.5: (Bupa's Health Information Team, 2010)
Figure 2.3.1: The top is the raw signal; the bottom is the result of full-wave rectification (Konrad, 2006).
If the skills can be extracted through means that require far less effort, it will make robot learning far easier. In an industrial situation, for example a car assembly line, it is necessary to track the force and torque output provided by the humans, and then teach the robot using this information. However, it is not feasible to place Force/Torque sensors on each component of the car, since they are bulky and not dedicated. There is therefore a need for reliable Force/Torque prediction whilst carrying out these assembly tasks, and one solution is to measure muscle activity, since the muscle generates the force; hence the EMG signal has been selected. The detection of sEMG is far easier since it only requires the use of lightweight and portable sensors. There is a need to enable the accurate prediction of the Force/Torque output from the sEMG signal. Signal processing is a vital part of producing a predictive model that is accurate and usable, hence the need for a platform that allows this processing to be carried out accurately and quickly. Once this has been achieved, it would remove the need for bulky sensors during human demonstrations and make the skill extraction and post-processing far easier, thus creating a more flexible human-robot control interface.

2.3 Signal Processing
As mentioned previously, the processing of raw data signals is crucial in producing an accurate predictive model for the Force/Torque output. There are many methods of processing available, but this report will focus on the four methods recommended by Zhao.

Data Rectification
Rectification is the first stage of processing the raw data, but it is only carried out on the sEMG signals, since they are far noisier than the raw Force/Torque signals.
This stage translates the raw sEMG signals into a single polarity, which usually means that all the negative sEMG values are transformed into positive ones. This stage is necessary
because the average of a raw sEMG signal is usually zero; therefore, when an attempt is made to smooth it, the result is simply zero (Rose, 2011). This stage is carried out by calculating the mean of the signal; it is then integrated, and the Fast Fourier Transform is applied to calculate the discrete Fourier transform. This means that the signal is transformed from its original domain, in this case time, into the frequency domain. There are two kinds of rectification. The first is full-wave rectification, which works by reflecting the parts of the EMG signal that are less than zero to positive values, resulting in a completely positive signal; this is usually the preferred process, and an example can be seen in Figure 2.3.1. Half-wave rectification, by contrast, discards all data less than zero (McDonough's, 2008). This process is shown in Figure 2.3.2, where the dotted lines represent the values less than zero, which are deleted during the process.

Filtration
Due to both signals' sensitivity to various factors during collection, they are subject to a significant amount of noise that needs to be removed before they can be interpreted correctly. Filtering is a process in which frequencies of a specific range are diminished whilst others are allowed to pass, thus limiting the frequency spectrum of the output signal. Digital signal filters operate on discrete-time signals; although sEMG is an analogue signal, once sampled it can still be handled in this way. There are two primary types of digital filter structure, Finite Impulse Response (FIR) and Infinite Impulse Response (IIR); these can be used to implement any sort of frequency response digitally (Wagner & Barr, 2012). Both of these will be explained at a later stage.
Figure 2.3.2: Half-wave rectification of a raw sEMG signal (McDonough's, 2008)
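The two rectification variants described above can be sketched in a few lines. The project's software itself was written in MATLAB, so the Python/NumPy snippet below is only an illustrative sketch; the signal values are made up for demonstration.

```python
import numpy as np

def full_wave_rectify(emg):
    """Full-wave rectification: reflect negative samples about zero."""
    return np.abs(emg)

def half_wave_rectify(emg):
    """Half-wave rectification: discard (zero out) negative samples."""
    return np.maximum(emg, 0.0)

# A toy zero-mean "sEMG" signal (illustrative values only).
raw = np.array([0.4, -0.3, 0.25, -0.5, 0.15, -0.05])

full = full_wave_rectify(raw)
half = half_wave_rectify(raw)

# The raw signal averages to roughly zero, so smoothing it directly
# would yield ~0; after full-wave rectification the mean becomes a
# meaningful amplitude estimate.
print(raw.mean(), full.mean())
```

Note how the mean of the raw signal is close to zero while the mean of the rectified signal is not, which is exactly the motivation for rectifying before smoothing.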
The frequency range that is diminished is called the "stopband", and the frequencies that are allowed to pass make up the "passband". There are many types of filter, the most common being low-pass, high-pass, band-pass and band-stop. Low-pass filtering removes high-frequency components and also removes any DC offset; the most common values for the frequency cut-off lie in the range 5-20 Hz. High-pass filters remove low-frequency noise from the signal by removing components with a frequency below a specified cut-off value, and help stop aliasing from occurring. This makes different signals within the data more distinguishable from one another, resulting in a clearer signal. The high-frequency cut-off should be quite high so that rapid on-off bursts of the signal are still clearly identifiable. A band-pass filter ensures that only frequencies within a specified range, which is usually quite narrow, are transmitted. Finally, a band-stop filter, which passes both low and high frequencies, blocks a predefined range of frequencies in the middle. Figure 2.3.3 shows the transformation of a signal from low-pass through to band-stop.

FIR Filters
Digital non-causal Finite Impulse Response (FIR) filters are often the recommended method of filtration (Merletti, 1999). These filters have an impulse response that is finite, because it settles to zero after a finite duration of time, and they require no feedback (a main advantage over IIR filters). The difference equation for a filter is the formula which computes the output sample at time n, based on past and present samples in the time domain (Smith J. O., 2007). The equation for an FIR filter is:
Figure 2.3.3: Examples of the four filter configurations (Smith S. W., 1999)
y[n] = Σ_{k=0}^{M} b_k · x[n − k]

where:
x[n] is the input signal;
y[n] is the output signal;
M is the filter order, which determines the number of filter delay lines, i.e. the number of input and output samples that must be saved so that the next output sample can be computed;
b_k are the feed-forward coefficients (Wickert, 2016).

2.3.2.1.1 FIR Filter Design
Within the field of FIR filters there are numerous design techniques available. The following explains the four main techniques used in the filtering of signals:

1. Design by Windowing
These methods are often considered the most straightforward way of designing FIR filters. They work by determining the infinite-duration impulse response, by expanding the frequency response of an ideal filter in a Fourier series; this response is then truncated and smoothed using a window function. The filters in this category are usually considered relatively simple because their impulse-response coefficients can be obtained in a closed-form solution and determined very quickly, even using a calculator. However, the passband and stopband ripples of the resulting filter are restricted to being approximately equal (Saramaki, 1993).

An example of design by moving windows is the moving average filter. The output signal can be computed using the following equation:

y[i] = (1/M) Σ_{j=0}^{M−1} x[i + j]

where:
x[i] is the input signal;
y[i] is the output signal;
M is the number of points in the average, which can also be described as the window size.

Based on a specified time window, a certain amount of data is averaged using a sliding-window technique. This filter is useful for reducing noise in a waveform. When it is applied to a rectified signal, the result is called the Average Rectified Value (ARV). Figure 2.3.4 shows the result of an 11-point moving average filter (M = 11) and a 51-point filter (M = 51). In this example, a rectangular pulse is buried in noise (see the original signal on the left). As the number of points in the filter increases, the noise decreases, but the edges become less sharp. The moving average filter is the optimal solution for this problem: it provides the lowest noise possible for a given edge sharpness (Smith S. W., 1999).

2. Least-Mean-Square Method
The least-mean-square (LMS) method is based upon least-squares approximation, which is used to implement steepest descent in statistics. The error signal is the difference between the desired and actual signals. Since the signal statistics are estimated continuously, the LMS algorithm can adapt to changes in the signal statistics; LMS is therefore an adaptive filtering method (Lund University, 2011). The method works by finding the filter coefficients that produce the least mean square of the error signal.

3. Maximally Flat FIR Filters
Figure 2.3.4: The results of an 11-point moving average filter (Smith S. W., 1999)
Maximally flat filters are a family of maximally flat and symmetric filters known for their monotone and flat magnitude response. The advantage of these filters is that they are simple to design, and they are useful in applications where the signal is to be preserved with very small error near zero frequency. Much like the windowing method, their filter coefficients can be solved in a closed-form solution, which is why they are relatively simple to compute. Furthermore, these types of filters allow a passband with a smooth frequency response to be achieved.

4. Minimax FIR Filters
Minimax filters provide good control of the detailed frequency behaviour of filters, and allow the number of independent filter coefficients required for optimally designing an FIR filter to be reduced. This makes them a practical option for the filtering of signals.

Infinite Impulse Response Filters
An IIR filter has an infinite impulse response and, unlike an FIR filter, uses feedback, which gives it a much better frequency response. The following difference equation can be used to compute the output signal:

y[n] = Σ_{j=1}^{N} a_j · y[n − j] + Σ_{k=0}^{M} b_k · x[n − k]

where:
x[n] is the input signal;
y[n] is the output signal;
N is the feedback filter order;
M is the feed-forward filter order;
b_k are the feed-forward coefficients;
a_j are the feedback coefficients (Wickert, 2016).

The feedback makes IIR filters prone to stability issues that FIR filters do not possess. Also, in cases where phase linearity is required, it is best to use an FIR filter, since IIR filters do not possess linear phase characteristics (Milivojević, 2009); however, in the case of sEMG and F/T signal processing, this is not an issue. An advantage that IIR filters have over FIR filters is that they tend to meet a given set of
specifications with a much lower filter order than a corresponding FIR filter (MathWorks, 2016). According to MathWorks, the classical types of IIR filter used in EMG signal processing are the Butterworth, Chebyshev (types I and II), Elliptic and Bessel filters (MathWorks, 2016). Each of these filters can be used in the low-pass, high-pass, band-pass and band-stop configurations.

1. The Butterworth Filter
This filter is best used for its maximally flat response in the transmission passband, minimising passband ripple, and is the most desirable filter for applications that require the preservation of amplitude linearity (Luca, 2003). The behaviour of a Butterworth filter can be summarised by its frequency response function, which has the following formula (Taha, et al., 2015):

|H_n(jω)|² = 1 / (1 + (ω/ω_c)^(2n))

where:
H_n is the frequency response;
n is the filter order: as the filter order increases, the roll-off of the response becomes steeper;
ω_c is the cut-off frequency: the selected frequency above or below which values are not allowed to pass in the case of low- or high-pass filters; for band-pass and band-stop configurations there are two cut-off frequencies, and values inside or outside this range are not allowed to pass;
ω_s is the sampling frequency, which should always be at least twice the highest frequency component that appears in the signal (Welker, 2006).

2. The Chebyshev Filter
The Chebyshev filter is used to separate one band of frequencies from another. Its primary characteristic is speed, which results from using a mathematical strategy that allows rippling in the frequency response, producing a faster roll-off.
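Returning briefly to the Butterworth filter: the closed-form magnitude formula above can be checked numerically against SciPy's analog Butterworth design. This Python sketch is illustrative only (the project software was MATLAB-based), and the order and cut-off chosen below are arbitrary.

```python
import numpy as np
from scipy import signal

# Analog Butterworth low-pass: order n = 4, cut-off 20 rad/s
# (arbitrary illustrative values).
n, wc = 4, 20.0
b, a = signal.butter(n, wc, btype='low', analog=True)

# Evaluate the frequency response at a few angular frequencies (rad/s).
w = np.array([1.0, 20.0, 100.0])
w, h = signal.freqs(b, a, worN=w)

# Compare against the closed-form |H(jw)| = 1 / sqrt(1 + (w/wc)^(2n)).
expected = 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))
print(np.abs(h), expected)
```

At the cut-off frequency (w = wc) the magnitude drops to 1/√2 ≈ 0.707, the familiar −3 dB point, and the designed response matches the closed-form formula at every frequency checked.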
Type I Chebyshev filters are the most common type of Chebyshev filter; their squared magnitude response can be determined from this equation (Matheonics Technology Inc, 2009):

|H(jΩ)|² = 1 / (1 + ε² T_n²(Ω/Ω_passband))

where:
n is the filter order;
Ω_passband = 2πf_passband is the constant scaling frequency, equal to the pass-band edge frequency;
Ω = 2πf is the angular frequency;
T_n is the Chebyshev function of degree n;
ε is the ripple factor.

3. Elliptic Filters
Elliptic filters have equiripple characteristics in both the pass-band and the stop-band, meaning they have equal ripples in both bands (BORES Signal Processing, 2014). The squared magnitude response can be determined from the following equation (Matheonics Technology Inc., 2009):

|H(jΩ)|² = 1 / (1 + ε² R_n²(Ω/Ω_passband))

where:
n is the filter order;
Ω_passband is the constant scaling frequency;
Ω is the radian frequency;
R_n(Ω/Ω_passband) is the elliptic rational function of order n;
ε is the parameter that characterises the loss of the filter in the pass-band.
A study was undertaken by Sharma, Duhan and Bhatia in which they filtered a raw EMG signal using the Butterworth, Chebyshev I and Elliptic filters; the results of this study are shown in Figure 2.3.5. Each filter in this example was a low-pass filter, but each had different input parameters, which can be seen in Figure 2.3.6. These figures show that, in order to achieve similar levels of filtration, the input parameters must differ between methods. For example, the Chebyshev filter uses a higher passband frequency and a lower stopband frequency than the Butterworth filter to produce a similar output signal.

Figure 2.3.5: Results of EMG filtering, clockwise: raw data, Butterworth filtration, Chebyshev I filtration and Elliptic filtration (Sharma, Duhan, & Bhatia, 2010)
Figure 2.3.6: The input parameters used to filter the EMG signal (Sharma, Duhan, & Bhatia, 2010)
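The trade-off described in that study can be reproduced in outline with SciPy's filter-design routines. This is a Python sketch for illustration; the order, cut-off, sampling rate and ripple values below are assumptions, not the parameters of the cited study:

```python
import numpy as np
from scipy import signal

fs = 1000.0           # hypothetical sampling rate (Hz)
order, fc = 4, 50.0   # same order and cut-off frequency for every design

# Low-pass designs. The 1 dB passband / 40 dB stopband ripple values are
# illustrative placeholders only.
designs = {
    "butter": signal.butter(order, fc, fs=fs),
    "cheby1": signal.cheby1(order, 1, fc, fs=fs),
    "ellip":  signal.ellip(order, 1, 40, fc, fs=fs),
}

# Magnitude of each response at twice the cut-off frequency: for the same
# order, the ripple-tolerating designs roll off faster than the Butterworth.
mags = {}
for name, (b, a) in designs.items():
    _, h = signal.freqz(b, a, worN=[2 * fc], fs=fs)
    mags[name] = abs(h[0])
```

This illustrates the same point as the study: allowing ripple (Chebyshev, Elliptic) buys a sharper transition for a given order, so matching a Butterworth's selectivity requires different input parameters.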
4. The Bessel Filter
The Bessel filter is a linear filter that has a maximally flat phase delay, which preserves the wave shape of the filtered signal in the passband. It has a smooth passband and stopband response, like the Butterworth; however, for the same filter order, the stopband attenuation of the Bessel approximation is much lower than that of the Butterworth approximation. For a first-order filter, the squared magnitude response is (Bond):

|H(jω)|² = 1 / (ω² + 1)

Where:
ω is the angular frequency.

Normalisation
The amplitude and frequency characteristics of raw sEMG signals are highly sensitive to many factors, including electrode configuration, electrode placement and skin preparation; these factors vary between individuals, between days for the same individual, and between electrode configurations. The same applies to the F/T output signal, which varies with the positions that the muscles are in during the test. Because of this high sensitivity, it would not be valid practice to directly compare the signal of a single muscle from a single subject with those of multiple subjects. The signals therefore need to be normalised, so that the raw signal has a reference value against which it can be compared. A "good" reference value is one that is highly repeatable, especially when using the same subject under the same conditions. A reference value that is repeatable for an individual allows comparison between individuals and between muscles (Halaki & Ginn, 2012). Normalisation is usually done by dividing the EMG signal recorded during the task by a reference EMG value obtained from the same muscle. By normalising to a reference EMG value collected using the same electrode configuration, the factors that affect the signal during the task and during the reference contraction are the same; the result is a relative measure of activation compared to the reference value (Halaki & Ginn, 2012).

Methods of Normalisation
There are multiple methods available for the normalisation of sEMG signals, but there is no consensus on which method is best. The methods are summarised as follows (Halaki & Ginn, 2012):
1. Maximal Voluntary Isometric Contractions
This is the most common method of normalising EMG signals and uses the EMG recorded from the same muscle during a maximal voluntary isometric contraction (MVIC) as the reference value. The process works by identifying a reference test that produces a maximal contraction in the muscle of interest. The test is repeated multiple times, producing multiple sets of data. The maximum value from the reference test is then used as the reference value for normalising all the EMG signals. This allows the level of activity of the muscles of interest to be compared to the maximal neural activation capacity of the muscle (Halaki & Ginn, 2012).
2. Peak or Mean Activation Levels Obtained During the Task Under Investigation
This method normalises the data to the peak or mean activity obtained during the activity, in each muscle, for each individual separately. It has been shown to decrease the variability between individuals compared with using raw EMG data or normalising to MVICs. Furthermore, normalising to the mean amplitude has proven better at reducing variability between individuals than normalising to the peak amplitude (Halaki & Ginn, 2012).
3. Activation Level During Submaximal Isometric Contractions
Whilst maximal isometric contraction is the most popular method of obtaining a normalisation reference, it is not always feasible, for example where the subject cannot achieve a maximum-effort contraction because of physical limitations. Using submaximal isometric contractions avoids the instability of the EMG signal at near-maximal levels.
Furthermore, previous studies have demonstrated that submaximal reference values produce better between-day reliability than maximal loads (Sousa & Tavares, 2012).
4. Peak-to-Peak Amplitude of the Maximum M-Wave (M-max) (EMG only)
This method involves the external stimulation of α-motor neurons. When a peripheral motor nerve is stimulated at a point close to a muscle, it activates the muscle to contract; the resulting signal is called the M-wave. The amplitude of the stimulation is increased until the peak-to-peak amplitude of the M-wave stops growing, and is then increased by an additional 30%, which allows the maximum M-wave and maximum muscle activation to be attained. The maximum M-wave value is then used to normalise the EMG signals. However, this method is problematic because the accuracy of the M-max is questionable. Its reliability is sensitive to factors such as muscle length and the task performed; if these factors are controlled, however, this method of normalisation has the potential to facilitate comparison between muscles, between tasks and between individuals (Halaki & Ginn, 2012).

The Recommended Normalisation Method
There are limited studies available that describe techniques for normalising Force/Torque signals in an intelligent automation context specifically. Therefore, for continuity, the method selected for normalising the sEMG signals will also be applied to the F/T signals. The literature written by Yuchen Zhao recommends the Peak or Mean activation normalisation method. The reason for this is that "the muscle activation levels are not directly compared, but the activation patterns and their corresponding force torque datum are of interest" (Zhao, Al-Yacoub, Goh, Justham, Lohse, & Jackson). The reference value that should be used is the highest value obtained from the rectified data for the sEMG signal, and the peak value obtained from the raw F/T data.
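Peak normalisation as recommended above amounts to dividing the rectified channel by its own maximum. A minimal NumPy sketch for illustration, using synthetic data in place of a recorded sEMG channel:

```python
import numpy as np

rng = np.random.default_rng(0)
raw_emg = rng.normal(0.0, 1.0, 2000)   # stand-in for one recorded sEMG channel

rectified = np.abs(raw_emg)            # full-wave rectification
# Peak normalisation: divide by the highest rectified value, so the channel
# is expressed as a dimensionless fraction (0..1) of its own peak activation.
normalised = rectified / rectified.max()
```

Because each channel is scaled by its own peak, channels recorded with different electrode placements or on different days become comparable on a common 0–1 scale.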
Both Force/Torque and EMG normalisation should record other relevant information, such as joint angles and muscle lengths for isometric contractions, and the range of joint angles, muscle length, velocity of shortening or lengthening, and the load applied for non-isometric contractions.

Principal Component Analysis
The final stage of processing is Principal Component Analysis (PCA). This stage removes any unnecessary components, reducing the signal down to its basic components. The number of principal components is either less than or equal to the number of original variables. The principal components are the underlying structure of the data: the directions in which there is the most variance (Dallas, 2013). The purpose of this stage is to allow the model to be used without a fixed electrode placement, which will make the method more adaptable. In PCA, only the covariance between the variables, the 8 channels of sEMG data, is considered, and the components are re-ordered from the most important to the least important. For the Force/Torque signals, the 6 degrees of freedom are considered and re-ordered based on their importance.

PCA Theory
PCA is a linear transformation in which a new coordinate system is selected for the data set, such that the greatest variance by any projection of the data set comes to lie on the first axis (the first principal component), followed by the nth greatest variance on the nth axis (Neto, 2016). Once the components of the data set have been re-ordered in this way, those with less importance can be eliminated. The eigenvalues and eigenvectors of the covariance matrix of the dataset must be found in order to compute the importance of the components. The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the dataset; these are the principal components.

Let x₁, x₂, …, x_k be a set of k vectors of size n×1, and let x̄ be their average:

x̄ = (1/k)(x₁ + x₂ + ⋯ + x_k)

Let X be the n×k matrix whose columns are x₁ − x̄, x₂ − x̄, …, x_k − x̄:

X = [x₁ − x̄ | … | x_k − x̄]

This process of subtracting the mean is equivalent to translating the coordinate system to the location of the mean (Camps, R.S. Gaborski, & Seung, 2005). The symmetric square covariance matrix, S, is then:

S = (1/(k − 1)) X Xᵀ
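The mean-centring and covariance construction just described can be sketched in NumPy (illustrative only; the channel and sample counts are assumed, and the eigen-decomposition used in the following step is included for completeness):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 200                 # n variables (e.g. 8 sEMG channels), k observations
x = rng.normal(size=(n, k))   # columns are the vectors x_1 ... x_k

x_bar = x.mean(axis=1, keepdims=True)    # the average vector
X = x - x_bar                            # columns x_i - x_bar (mean-centring)
S = (X @ X.T) / (k - 1)                  # symmetric covariance matrix

# Eigen-decomposition of S yields the principal components; eigh returns
# eigenvalues in ascending order, so reverse for "most important first".
evals, evecs = np.linalg.eigh(S)
evals, evecs = evals[::-1], evecs[:, ::-1]
```

Note that `S` computed this way matches NumPy's built-in `np.cov`, which performs the same centring and 1/(k − 1) scaling internally.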
Then let λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_n ≥ 0 be the eigenvalues of S in decreasing order, with corresponding orthonormal eigenvectors u₁, …, u_n. These eigenvectors are the principal components of the data. In many cases, the largest few eigenvalues of S are much greater than all the others, which means that the first few principal components explain a significant amount of the total variation in the data, e.g. greater than 95%; the remaining components can therefore be eliminated. This is Dimensional Reduction (Jauregui, 2012). Figure 2.3.7 displays an example of the results of dimensional reduction, based on PCA being carried out on a signal using a Matlab function.

Figure 2.3.7: A signal before (top) and after (bottom) dimensional reduction (MathWorks, 2016)

All of the methods of signal processing described in this section involve very advanced and complex mathematics. However, those collecting the signals will not always understand this theory, so there needs to be a platform available for them to carry out the post-collection processing. It is crucial that the platform is sophisticated enough to compute the complex algorithms in the background to produce the cleaner signals, whilst requiring only very limited user input.

Artificial Neural Network (ANN)
The purpose of processing the signals using the previously described methods is to produce clear signals that can be fed to an Artificial Neural Network. This stage is used firstly to derive a relationship between the F/T and sEMG data; this relationship can then be used to predict the F/T output from the sEMG signal (Liu, Herzog, & Savelberg, 1999). An Artificial Neural Network is a biologically inspired method of computing which is thought to be the next major advancement in the computing industry and offers an initial understanding of the natural thinking mechanism. It is an information processing paradigm inspired by the way in which biological nervous systems such as the brain process information and learn from experience. Figure 2.3.8 is a schematic diagram of a proposed ANN for hand force estimation. Currently, a machine's ability to learn from experience is surpassed even by animals. Computers can do rote learning, a technique for memorising based on repetition, which makes them skilled at things such as advanced mathematical functions. But when it comes to recognising patterns, computers struggle, let alone reproducing those patterns in future actions. The human brain stores information as patterns and utilises them for tasks such as facial recognition. This process of storing information as patterns, utilising those patterns and then solving problems is a new field of engineering and is not yet achievable in robots (DoD DAC).

Structure of an Artificial Neural Network
An Artificial Neural Network is composed of a large number of highly interconnected processing elements (neurons). This set of neurons is organised into interconnected layers along chosen patterns. Each neuron unit receives some kind of stimulus as an input from another unit or an external source. Each input, x_ij, j = 1, 2, …, has an associated weight, w_ij. The neuron processes these inputs and sends the result through its links to neighbouring output neurons. The output, y_i, is computed by applying the activation function, f, to the weighted sum of the inputs.
This is represented mathematically in the following equation (Mobasser & Hashtrudi-Zaad, 2005):

y_i = f(net_i) = f(Σ_j w_ij x_ij)

Figure 2.3.8: Schematic diagram of a proposed ANN (Mobasser & Hashtrudi-Zaad, 2005)

Modern ANN structures have moved away from the initial biological model to one that works better with statistics and signal processing. There are several types of ANN structures, with variations relating to their topologies and search algorithms. It is extremely important that these networks are able to adapt to new environments, thus making them very reliant on learning algorithms. ANN models are also characterised by their activation functions, the number of layers and neurons, and the distribution of the connections. A typical neural network can be seen in Figure 2.3.9. It is an adaptive system comprised of the following four main sections:
1. A node that activates after receiving incoming signals (inputs);
2. Interconnections between nodes;
3. An activation function, located inside a node, that transforms an input into an output;
4. An optional learning function for managing the weights of input-output pairs. (Tadiou, 2016)

Figure 2.3.9: (Tadiou, 2016)
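The single-neuron computation y_i = f(net_i) described above can be sketched as follows (Python for illustration; the sigmoid activation and the example inputs and weights are assumptions, not values from the cited ANN):

```python
import numpy as np

def sigmoid(z):
    """A common choice of activation function f."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, f=sigmoid):
    """One unit's output: y = f(net) = f(sum_j w_j * x_j)."""
    return f(np.dot(w, x))

x = np.array([0.5, -1.2, 3.0])   # example input stimuli
w = np.array([0.8, 0.1, -0.4])   # associated weights
y = neuron_output(x, w)
```

A full network simply chains layers of such units, with a learning algorithm adjusting the weights so that the mapping from sEMG inputs to F/T outputs improves with experience.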
Case Study Application
This section describes the procedure carried out in order to collect the sEMG and F/T signals. The procedure was observed, and data was collected from it for the purpose of testing the software.

3.1 The Equipment
The Thalmic Labs Myo Armband Sensor
The Myo armband (see Figure 3.1.1) measures electrical activity from muscles using EMG sensors to detect five gestures made by the hands. The armband consists of eight built-in channels of medical-grade stainless steel EMG sensors. It also uses a nine-axis Inertial Measurement Unit (IMU) to sense the motion, orientation and rotation of the arm using three instruments: the accelerometer, the gyroscope and the magnetometer. This data is collated to obtain the joint state of the wrist, forearm and upper arm. The signals are recorded for a specified duration of time and saved using specialist software.

The Xsens MTw Wireless Motion Tracker
This device is a highly accurate, completely wireless 3D human motion tracker; see Figure 3.1.2. It uses an IMU sensor to provide accurate measurements of orientation, acceleration, angular velocity and the Earth's magnetic field. For this project, it will be fixed to a glove using its clipping mechanism, see Figure 3.1.3. This will allow the motion of the wrist to be measured.

Figure 3.1.1: (Drew Prindle, 2015)
Figure 3.1.2: (We Are Perspective, 2010)
Figure 3.1.3: The IMU sensor clipped onto a glove
The 6-Axis ATI Force/Torque Sensor
This Force/Torque sensor is a device that measures the output forces (in newtons) and torques (in newton metres) about all three Cartesian axes (x, y and z), six axes in total, thus producing six signal components. The system is made up of a transducer, a high-flex cable, an intelligent data-acquisition unit and an F/T controller. It is most commonly used in industry for product testing and robotic assembly, to ensure that the robotic arm applies only the force and torque necessary to complete the application (ATI Industrial Automation, 2016). In this case study, it will be used to measure the F/T output that the human arm applies during the Peg-in-Hole experiment, by installing it on the base of the pegs used in the experiment.

Figure 3.1.4: (ATI Industrial Automation, 2016)

3.2 Experimental Method
The capture of the sEMG and Force/Torque data is split into two stages:
Primitive Calibration Stage: This stage is the collection of reference signals that the operator should be able to repeat, or at least approximate, during the actual PIH experiment, for both the sEMG and F/T signals.
1. Sensor Calibration Stage: This initial stage is to calibrate the IMU sensors. The aim is to ensure that all three sensors use one Global Reference Frame; in this case the chosen reference frame is the one belonging to the black IMU sensor. The IMU sensors are designed to use the Earth's true north as their reference; however, they do not always adhere to this due to variations in the magnetometer. The magnetometer in the IMU is influenced by the ambient magnetic field, which causes the location of the true north detected by the sensors to be inaccurate.
The calibration of the sensors is carried out by first aligning them in the same direction on a flat surface (a table), see Figure 3.2.1, then placing the sensors in the direction that the user is facing. Here, the black IMU sensor is used as the parent frame for the rest of the IMUs.

Joint State Data Collection
Whilst the sensors will now be using one global reference frame, each individual human body has its own reference frame. The purpose of this stage is to calculate the joint state of the elbow, shoulder and wrist; in order to do this, the sensor body frame readings must be known in terms of their orientations. The process of collection is as follows:
1. Place the sensors on the arm at three different locations on the limb, as seen in Figure 3.2.2. It is important that the location of the sensors is the same in every experiment. For example, the location of the two Myo armband sensors can be selected by choosing a specific distance away from the tendon in the elbow. The position of the centre of each sensor should then be marked with a marker pen so that it can be put in the same position every time. The IMU-only sensor is worn on the glove; therefore it can be assumed that it will be in the same location every time.
2. The operator should make an 'L' pose, as shown in Figure 3.2.3. All the orientations of the individual frames will now refer to this reference frame. This should produce the orientation of each sensor in quaternions. This measurement will be taken again at the end of the experiment, to ensure that the sensors remained calibrated throughout.

Figure 3.2.1: Sensor calibration
Figure 3.2.2: Sensors on the arm
Figure 3.2.3: The 'L' pose
3. Now test the visualisation software to make sure that it is in sync with the sensors. The user should move their arm freely in space into various positions. If the software is calibrated correctly, the 3D model of the arm should move to the same positions as the real arm, see Figure 3.2.4.
4. Wait 5 minutes and re-check the visualisation software, checking that the 3D model arm still moves in sync with the real arm.

Getting Reference Data for the sEMG Signal
This stage involves collecting sEMG data from the two Myo armband sensors, so that the final data collected at the end of the Peg-in-Hole (PIH) experiment can be compared against these values for validation. The sEMG signals will be recorded while moving the arm in two different ways: not holding the peg and holding the peg. This is to show that the data collected is repeatable.
1. Hold the arm in front of the chest at the distance where the PIH set-up would be.
2. Position the hand in the same gesture it would make if it were holding the peg.
3. Now move the arm freely in the space, roughly in the same area the PIH would occupy. The shoulder and elbow joints must be relaxed, and the elbow joint must rotate in the same way it would in the PIH experiment. The wrist should also rotate the hand through the same area it would move through if it were carrying out the PIH experiment.
4. Repeat step 3 twice, recording the sEMG signals for the duration of one minute using the signal acquisition software. The first set of results is for training purposes and the second set is for testing purposes.

Figure 3.2.4: Calibration of software
5. Now repeat steps 3 and 4 twice, this time whilst holding the peg. The motion of the hand and arm should be roughly the same as when the hand was not holding the peg.

Getting Reference Data for the F/T Signal
This stage is where reference signals for the F/T signal are acquired. A stationary peg attached to the F/T sensor is clamped to the table to ensure that it will not move during the experiment, see Figure 3.2.5.
1. Hold the stationary peg using the same hand configuration as when the peg was held previously; for example, if the thumb and index finger were used to hold the peg, the same should be done to hold this stationary peg.
2. Push down at different radial increments. The elbow should only go as far as it would during the PIH experiment; when this point is reached, the operator should return to the starting point, then start again. This step should be carried out for one minute, with the signal being recorded at the same time.
3. Repeat step 2 twice; as with the sEMG signal acquisition, the first set of results is for training purposes and the second set is for testing purposes.

Peg-in-Hole Experiment
Humans can perform a large variety of seemingly simple tasks, but these are often difficult for robots to imitate. This is because, whilst humans have learned and possess intuitive skills in both grasping and performing assembly tasks, robots have not yet acquired this ability. In an assembly task there are two primary subtasks, the first being the grasping of objects and the second being the actual physical interaction between objects (Savarimuthu, Liljekrans, Ellekilde, Ude, Nemec, & Krüger, 2013). The Peg-in-Hole task is an example of such a task; it has been studied numerous times with differing perspectives and objectives, and is often used as an example of an assembly task.

Figure 3.2.5: Stationary peg with F/T sensor
Figure 3.2.6
Figure 3.2.7: (Zhao, Al-Yacoub, Goh, Justham, Lohse, & Jackson, 2015)
(Bodenhagen, Fugl, Willatzen, Petersen, & Kruger, 2012). It has been selected as the most suitable activity for the acquisition of the sEMG and F/T data. The Force/Torque data is acquired using a 6-axis ATI force/torque sensor installed on a fixed plate.
1. Hold the peg using the same configuration that was used to hold the stationary and freely moving peg before.
2. Perform the peg-in-hole task by carrying out the following simple steps: approaching, insertion, releasing and waiting. Do this for one minute, whilst recording the signal.
3. Repeat step 2 twelve times; each time the peg should start in roughly the same position.
4. This process should be carried out on two pegs of different diameters (15.8 mm and 16 mm). This will result in a total of 24 experiments being carried out.
5. Now check that the sensors stayed calibrated throughout the duration of the experiment by collecting the joint state data in the same way as before. If the results are significantly different, the entire experiment will have to be repeated.
The results of this experiment can now be uploaded to the software, where they will be processed so that they are suitable for input into the ANN.

Figure 3.2.8: Peg-in-Hole experiment
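The calibration procedure above expresses each sensor's orientation relative to the black IMU's parent frame. For unit quaternions, that relative orientation is the product of the parent's conjugate with the child's orientation. A sketch (Python for illustration; the project's own tooling differs), with the Hamilton product written out for clarity:

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate (= inverse, for a unit quaternion)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def relative_orientation(q_parent, q_child):
    """Child sensor's orientation expressed in the parent (black IMU) frame."""
    return q_mul(q_conj(q_parent), q_child)
```

Taking this relative orientation at the start and end of the experiment gives a direct check that the sensors stayed calibrated: if nothing drifted, the two relative quaternions should agree.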
Section 3: Software Design
4.1 Purpose of the Software
The raw sEMG signals and F/T signals collected from the previously described experiment are subject to a significant amount of noise and unnecessary components. In order for the two to be mapped together to produce a predictive model that is both accurate and reliable, the signals need to be processed so that they are clear enough to be correctly interpreted.

4.2 Software Requirements
"A user interface is well designed when the program behaves exactly how the user thought it would" (Spolsky, 2001). This means that the software carries out all the actions that it has been designed to do, as well as any actions that the user would expect it to do. The software created in this project must be able to fulfil the following requirements:

Stage 1: Signal Processing
Input 1: Surface Electromyography (sEMG) Signal
1) Input the raw data from the Surface Electromyography sensors
2) Rectify the sEMG signal
3) De-noise the signal using a suitable filtration method
4) Carry out the normalisation of the signal
5) Carry out the Principal Component Analysis
Input 2: Force/Torque Signal
1) Input the raw data from the Force/Torque sensor
2) De-noise the signal using a suitable filtration method
3) Carry out the normalisation of the signal
4) Carry out the Principal Component Analysis

Stage 2: Final Model Production and Artificial Neural Network
This stage is where the user will compile, view and save the resulting signals after processing.
Input 1: Surface Electromyography (sEMG) Signal
1) Load and compile the processed data onto the same plot
2) Save the data from the plot
Input 2: Force/Torque Signal
1) Load and compile the processed data onto the same plot
2) Save the data from the plot

Stage 3: Launch Artificial Neural Network Program
1. Launch the Neural Network Fitting application available in Matlab and close the current signal processing software.
A comprehensive flowchart depicting the flow of the software can be seen in Appendix II, 13.1.
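The Stage 1 requirements above chain together the processing steps described in the literature review. A sketch of such a pipeline for the sEMG input (Python/SciPy for illustration only; the delivered software is Matlab-based, and every parameter below, including the Myo's nominal 200 Hz EMG rate, is an assumption):

```python
import numpy as np
from scipy import signal

def process_semg(raw, fs=200.0, fc=60.0, order=4, n_components=4):
    """Stage 1 sketch: rectify -> low-pass filter -> peak-normalise -> PCA.

    raw has shape (n_samples, n_channels); all parameter values are
    illustrative placeholders.
    """
    rect = np.abs(raw)                              # 1) full-wave rectification
    b, a = signal.butter(order, fc, fs=fs)          # 2) de-noising (Butterworth)
    filt = signal.filtfilt(b, a, rect, axis=0)
    norm = filt / np.abs(filt).max(axis=0)          # 3) peak normalisation
    centred = norm - norm.mean(axis=0)              # 4) PCA + dimensional reduction
    evals, evecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    keep = np.argsort(evals)[::-1][:n_components]   # most important components first
    return centred @ evecs[:, keep]
```

The F/T pipeline would be identical minus the rectification step, with the 6 force/torque components in place of the 8 sEMG channels.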
Method Investigation
This section describes and explains four different options available for the creation of the signal processing software. Each method has been trialled by attempting to process signals and create a basic Graphical User Interface (GUI). Finally, a matrix has been created comparing each method against specific criteria, giving each method a score that helped to inform the final decision.

5.1 Matlab
Matlab is a procedural programming language developed by MathWorks, a company who specialise in mathematical computing software. Matlab can be used for mathematical functions, such as those of an advanced calculator, the plotting of functions, the implementation of algorithms and the creation of user interfaces. Matlab also has the capability to interface with many other languages, including C, C++ and Java. It is estimated that there are roughly 1 million Matlab users across the globe (EE Times, 2004), and these users come from a variety of backgrounds in science, engineering and economics. The functions required to carry out the signal processing are readily available in Matlab. These can be used along with the graphical user interface tool called 'GUIDE' to create the user interface. Figure 5.1.1 shows some of the controls available to build the GUI, such as buttons and text boxes. Figure 5.1.2 shows the 'Property Inspector', which contains the editing tools available to edit the controls.

Figure 5.1.1: UI building toolbox
Figure 5.1.2: Property Inspector
Trial of Method
The user interface capabilities of Matlab had not previously been used, so it was decided that the trial would first check how well this aspect worked using a simple function, rather than also trying to combine the signal processing capability, since that would have been too complex. In this example, the user is required to input a value in the blue text box and then press the "Compute" button to create the 3D shaded surface plot (pictured). The plotting code is written in the Matlab editor, and this is how all controls and functions on the user interface are programmed as well. After some basic coding, the user interface shown in Figure 5.1.3 was produced. The signal processing capabilities were then checked by attempting to rectify raw sEMG data from the experiment. The results are displayed in Figures 5.1.4 and 5.1.5: all the values are now positive, because all the negative components of the raw data became positive.

Figure 5.1.3: Trial of GUI capability (the user inputs a value, then presses the button, which produces the 3D plot)
Figure 5.1.5: Raw data
Figure 5.1.4: Results of sEMG data rectification
There are plenty of tools available to design and create the user interface, and the GUIDE tool works quite well. Furthermore, Matlab already has built-in functions that allow the processing of the signals. However, the Matlab software automatically inserts code known as "comments", which does not actually give instructions to the GUI. In this example the majority of the code was produced by Matlab automatically after adding each control; the actual function code was only 3 lines. The actual project will require code that is very long, so it could be too tedious to produce this when Matlab contributes so much automatically. However, it is possible to disable this functionality so that the only code present is what actually gives instructions to the GUI.

5.2 Visual Basic with C# Code
Visual Basic (VB) is a programming language and environment developed by Microsoft. VB was one of the first products to provide a graphical programming environment for developing user interfaces. Users of VB can add controls such as buttons and dialogs by dragging and dropping them and then defining their properties, rather than programmatically altering the user interface. VB is an object-oriented programming language, which means that it is event-driven and therefore reacts to events such as a button click. The user interfaces within VB can be created and modified programmatically for various languages such as C, C++, Pascal and Java. It is sometimes called a Rapid Application Development (RAD) system because it enables users to quickly build prototype applications (Beal, 2015).

C# is a simple, modern programming language that works in an object-oriented fashion. It is very similar to other languages such as C and C++ (Ecma International, 2006). Object-oriented programs are made up of two components: objects, which are data structures containing data in the form of fields, and the code, which gives the program methods to carry out.

Figure 5.2.1: Events
Visual Basic has an extensive list of "Events" which can be executed in the software. These range from simple ones such as edit-text, to those which are also simple to implement yet make the software look very advanced, such as mouse-hover. A list of these events can be seen in Figure 5.2.1. There are no functions readily available in VB that can carry out the signal processing. However, there are online libraries such as GitHub where it might be possible to find code written in C# and store it. The code would then call these stored commands and process the signals.

Trial of Method
Designing and building a basic user interface was very easy using the software. Visual Basic has the capability to build a Windows Form with various tabs; Figure 5.2.3 and Figure 5.2.2 show the attempt that was made. VB allows the creation of multiple tabs, which helps to keep the software concise. This feature could be used to have several tabs where the processing of the two different types of signal would take place. The user would be able to click the "Load File" button, which would allow them to locate the text file containing the raw signal data. This would then be plotted and displayed in the window seen in Figure 5.2.2.

Figure 5.2.3: GUI trial
Figure 5.2.2: GUI trial (this is where the signal would be displayed; after each stage of processing is selected, the plot automatically updates itself)
For the purpose of demonstrating the potential design and layout of the software, the signal displayed in Figure 5.2.2 was actually plotted using Matlab and loaded into the UI as a picture. After carrying out extensive research into ways in which data could be plotted and processed within a Visual Basic Windows Form, it was deemed too complicated, and near impossible, to plot the data, let alone carry out the signal processing.

5.3 Visual Basic with Matlab
The following methods combine the signal processing functions in Matlab with the GUI tool available in Visual Basic.

Matlab Coder
There is an application available in Matlab that converts code written there into C++ code. It generates readable and portable C and C++ code from Matlab code, including for the vast number of existing mathematical and graphical functions available (MathWorks, 2016). This method could be used to write the signal processing functions within Matlab and use the Coder application to convert them to C++, which could then be used in Visual Basic, where the UI is being created.

Trial of Matlab Coder
This method was trialled by writing a simple piece of code to load a matrix from a text file whilst ignoring the first column and row; this was the function to be converted into C++ for use in Visual Basic. When attempting to trial this method, Matlab itself kept crashing; see Figure 5.3.1 for a screenshot of this error. After several attempts, it was decided that if the method caused so many issues even when trying to convert the most basic function, it was best to stop the trials and look at other methods.

Figure 5.3.1: C Coder crashing Matlab
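The trial function described above — loading a matrix while skipping the first row and first column — is a one-liner in most numerical environments. A NumPy sketch for comparison, with made-up file contents purely for illustration:

```python
import io
import numpy as np

# Load a matrix from a text file while ignoring the first row (a header)
# and the first column (e.g. timestamps). The contents below are invented.
text = "t ch1 ch2\n0.00 1.5 2.5\n0.01 3.5 4.5\n"
data = np.loadtxt(io.StringIO(text), skiprows=1)[:, 1:]
```

In practice `io.StringIO(text)` would be replaced by a path to the signal data file; `skiprows=1` drops the header row and the `[:, 1:]` slice drops the first column.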
Dynamic Data Exchange (DDE)

Dynamic Data Exchange is a method of transferring data between programmes that was originally adopted by Microsoft Windows. "It sends messages between applications that share data and uses shared memory to exchange data between applications" (Microsoft, 2016). DDE follows the client-and-server model.

Trial of DDE

Both Matlab and Visual Basic support this method; it is therefore possible to implement many Matlab functions in VB (Cerqueira & Poppi, 1996). In this case, Matlab would act as the server and VB as the client. The method would work by creating the user interface in Visual Basic and sending the signal processing commands to Matlab; the resulting plots would then be sent back to Visual Basic, where they are displayed to the user. This method was researched extensively using various online libraries and forums offering expert advice. However, all the suggestions showed that the method would be far too complex to carry out and is beyond the scope of this project.

5.4 Methodology Comparison

This section reviews each of the four options that were trialled, discussing their advantages and disadvantages, and then presents the final chosen method based on an evaluation of the trials.

Matlab Only

Although Matlab is not built for designing user interfaces, its capabilities are excellent. One of its key advantages is its excellent built-in signal processing capability: it already has all the functions required for each stage, meaning that additional APIs will not have to be sourced elsewhere. Furthermore, Matlab has advanced mathematical and graphical capabilities, so it will be possible to plot and display the results of the processing, allowing the user to see visually what is happening to their data.
However, there are several downsides to using this method. An extremely significant one is that it would have to be entirely self-taught, as there was no previous experience or formal training in using this method to create software
prior to this project. This will significantly increase the time spent building the software. The code required to create even the most basic GUI is complex and extremely lengthy, as Matlab generates a significant amount of code every time a new control is added. The software also has a very limited range of events which can be executed, which will limit how advanced and professional the final software can be. In terms of aesthetics, Matlab has very limited tools dedicated to creating graphics; for example, it is only possible to use block colours, which cannot be made into gradients or patterns to make them more visually appealing. Finally, the only way to check the progress of the programme is to run the commands and see the results, which slows down the development process.

Visual Basic Only

Visual Basic's main function is to build GUIs; it therefore has excellent user interface building capabilities, with a wide range of tools available, ranging from simple buttons and message boxes to harder-to-programme tools such as timers and performance counters. Many of these tools can be physically placed on the Form with a limited amount of additional programming required to control them, whereas in Matlab many such features, including help and message prompts to the user, have to be created programmatically. Furthermore, it is extremely easy to change the appearance of the Form to make features bolder and give the UI a professional look, without having to rely on block colours. For example, VB allows pictures to be easily imported into "picture boxes" on the Form, which would allow the background to have a gradient and be sleeker looking. Finally, this method had been previously learned and could therefore be used with confidence, as only a limited amount of additional learning would be required to achieve the desired software.
The obvious downside is the lack of readily available mathematical and graphical functions, which means the signal processing cannot take place using VB alone; it could create a very attractive GUI that nevertheless does not compute the functions needed to meet the basic requirements of the software.
Matlab Coder-C++ Generator

The principle behind this idea is exactly what is needed to easily implement the signal processing commands from Matlab in Visual Basic. The benefit of this method is that it would allow the software to be designed using VB's excellent tools and graphics capabilities, whilst taking advantage of Matlab's powerful mathematical and plotting capabilities. It would allow the software to achieve both the desired functional capabilities and the professional aesthetics and features. The disadvantage of this method is the most significant of any of the four trialled methods: the Coder application kept crashing when attempting to convert even the most basic Matlab functions into C code. The conversion process is evidently so demanding that it would take an extremely long time to convert the potentially complex signal processing functions.

Dynamic Data Exchange

Like Matlab Coder, this method has the potential to combine the excellent attributes of both Matlab and Visual Basic. However, after carrying out rigorous research into this option and the methods of using it, it was decided that it required advanced programming skills and knowledge beyond the scope of this project.

5.5 Conclusion of Methodology:

In order to reach a decision on which method to use to create the software, a comparison matrix was created, which can be seen in Appendix II 13.5. Matlab scored the highest and was therefore selected. To summarise, it was selected for the following reasons:
- It has excellent mathematical and graphical computational capabilities
- It has graphical user interface generation capabilities that meet the requirements of the software that will be designed
User Interface Design

The main aim of this project is to design software that will allow users to easily process their raw data signals; the keyword is "easy". The software must be easy to understand, navigate and control by a user who is not necessarily an expert in signal processing. The user must be able to achieve a high level of signal processing with minimal input, meaning that the software does the majority of the work for them. Even if the software provides all the functions to carry out all the stages, it would be rendered useless if the user does not know how to navigate the software in order to use them. This section explains the research and process carried out to reach the final design. The design of the user interface must achieve two things:

1. Easy to use and navigate (usability)
2. Aesthetically pleasing

6.1 Usability

Usability has many definitions; a more specific one is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (Peuple & Scane, 2003). The term usability is considered by many to be far more quantifiable than the term "user-friendly". In his book Usability Engineering, Jakob Nielsen stated that usability can be broken down into five key components:

I. Learnability: The software should be so easy to use that users can quickly start to use it;
II. Efficiency: The software should be quick to use, for example requiring fewer keystrokes, thus enabling a high level of productivity;
III. Memorability: If the user returns to the software after a long period of not using it, it should not be necessary for them to re-learn how to use it;
IV. Errors: The system should have as few errors as possible; it should be possible to recover from errors, and catastrophic errors should be prevented from occurring;
V. Satisfaction: Users should find the software subjectively easy to use (Peuple & Scane, 2003)
Several methods that can increase the usability of the software will be explored. These are:

i. Providing instructions on how to use the software before moving onto the processing stages. These could be very brief, or in-depth to describe each stage.
ii. Help notes that appear when the user hovers over the button belonging to the function they want to use.
iii. Limiting the amount of input the user has, to avoid leaving room for errors; for example, only allowing the user to carry out the stages in one order.
iv. Where the user is required to give more significant inputs, they must be told the expected type and range; for example, the cut-off frequency entered during the filtration stage must be an integer between 1 and 20 Hz. If they enter a value that does not meet the criteria, they must be alerted to their error and given the chance to change it.

6.2 Aesthetics

Much like any other type of product, the aesthetics of the software are extremely important for many reasons. Firstly, if the product is to be sold in industry, it needs to be marketable. This means that it must have its own brand consisting of a distinctive name, logo and tag line. Secondly, the aesthetics will influence how likely people are to use it. Even though the look of the software does not affect its ability to carry out the desired processes, it does affect how easily the user can operate it.

Layout

Several layouts for the software were designed; most of them focus on how to display the resulting signal from the processes. These layouts were selected based upon research into current scientific software user interfaces; the compilation of this research can be seen in Appendix II 13.2 and 13.3.

1. Layout One
This layout has the plotted results displayed next to the buttons that the user presses to carry out each process. It is a very simple display, see Figure 6.2.1: Layout one, that will allow the user to avoid working through multiple windows, which can be tedious.
However, having every process on a single window makes it possible for the user to go through the processes in the wrong order, thus producing the wrong results.
Also, there are various inputs that the user must give to the software, and leaving room for these on a single window will make it extremely cluttered.

2. Layout Two
This layout has the different stages carried out on multiple tabs that the user can work through; see Figure 6.2.2: Layout two. However, for this to work, the software must only allow the user to go through the processes in one order. This can be done by disabling the previous tab when the user starts a new process. This layout was not used because Matlab's GUI does not allow for the creation of tabs.

3. Layout Three
This layout has each process carried out in a different window; see Figure 6.2.3: Layout three. Although this means that the user will have to go through multiple windows, it is easier to organise all the controls such as buttons and user input text boxes. Furthermore, this layout forces the user to carry out the processes in a single order, unless the capability to go back to a previous stage is provided through a 'Back' button.

Figure 6.2.1: Layout one
Figure 6.2.2: Layout two
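Two of the usability ideas discussed above, forcing a single processing order (with an optional 'Back' button) and validating significant user inputs such as the 1-20 Hz integer cut-off frequency, can be sketched as follows. This is a hypothetical illustration in Python (the delivered software runs on Matlab); the stage names and helper names are invented for the example:

```python
def validate_cutoff(value, low=1, high=20):
    """Check a user-entered cut-off frequency: it must be an integer
    between low and high Hz (1-20 Hz for the filtration stage).
    Returns (ok, message); on failure the GUI would display the
    message and prompt the user to re-enter the value."""
    try:
        freq = int(value)
    except (TypeError, ValueError):
        return False, "Cut-off frequency must be an integer."
    if not low <= freq <= high:
        return False, f"Cut-off frequency must be between {low} and {high} Hz."
    return True, ""


class StageSequencer:
    """Enforce a fixed processing order: a stage may run only when all
    earlier stages are complete, mirroring the disabled-tab and
    forward-only-window ideas. Stage names are illustrative."""

    def __init__(self, stages):
        self.stages = list(stages)
        self.completed = 0   # number of stages finished so far

    def can_run(self, stage):
        return self.stages.index(stage) == self.completed

    def run(self, stage):
        if not self.can_run(stage):
            raise ValueError(f"'{stage}' attempted out of order")
        self.completed += 1

    def go_back(self):
        """The 'Back' button: re-enable the previous stage."""
        self.completed = max(0, self.completed - 1)
```

In a tabbed layout, `can_run` would decide which tab is enabled; in a multi-window layout, it would decide which window opens next.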
Colour scheme

Colour is an extremely important part of user interface design, as it plays a vital role in how the user interacts with the software. If a colour scheme is chosen that consists of various bright colours, or only dark colours, it can make the user interface very difficult to view and lettering difficult to read. To select a colour scheme, Adobe Kuler was used. This is a website that generates multiple colour schemes based upon how well suited each colour is to the others. Although many different schemes were generated using this method, for the purpose of this report only three will be discussed. Each was trialled on the welcoming and instruction page to show how well it worked, and one was selected.

Colour Scheme One:
This scheme, Figure 6.2.4, has a good range of colours, from bold (orange) to the more typical colours used in scientific software (blues). The contrasting colours will help to make features such as buttons and text boxes stand out.

Figure 6.2.3: Layout three
Figure 6.2.4: Scheme one
Figure 6.2.5: Scheme two
Figure 6.2.6: Scheme three

Critical Review: The colours used in this scheme are too bright, which makes the text on the screen uncomfortable to read. Furthermore, the use of these bright colours makes the software look less professional.

Colour Scheme Two:
This scheme, Figure 6.2.5, uses professional and corporate-looking colours in very similar shades. None of the colours is too bright or overwhelming, and they all complement each other well.

Critical Review: The colours do not contrast with each other enough; therefore features such as buttons will not stand out. Furthermore, the use of similar colours creates a very monotonous display that lacks interest.

Colour Scheme Three:
This colour scheme, Figure 6.2.6, consists of colours that contrast with each other and those that
complement each other. The use of bolder colours will help to highlight features and communicate the different roles of those features. Furthermore, the use of corporate blues will give the software a professional look.

Critical Review: Certain colours, namely the bright pink, make the software look less professional and may need to be toned down if this scheme is to be used.

Colour scheme three was selected because of the professional look the colours give the interface, as well as the brighter colours, which help to highlight features such as buttons and user input text boxes. However, some of the colours will need to be toned down to make the text more readable.
Description of Software

This section gives a brief overview of the final software; a page-by-page user manual can be found in the Veribus User Manual, which is a separate document.

Name of Software: Veribus (meaning "Human Force" in Latin)
Platform: Matlab

The completed software comprises seven windows, each connected to the others. The software first opens to a welcome screen; see Figure 6.2.1.

7.1 Introduction to Software

The purpose of these pages, see Figure 7.1.1, is to describe to the user the stages of signal processing that this software will allow them to achieve. It gives a short description of each stage and what it will do to their data signal. There is some essential information that the user will need to know before they can begin to use the software; these pages also give the user this information.

Figure 6.2.1: Welcome page
Figure 7.1.1: Information pages