For anyone new to Vision Systems, or those with little experience, this
presentation breaks down the task of developing a Vision System into 7 pieces of a puzzle, giving hints and tips that will help you avoid some of the pitfalls and allow you to create a great vision system.
8. Pattern Matching
Correlation
– Correlation based
– Only works with rotated and translated matches (not scaled)
– Sub-pixel accuracy
– More cost effective and faster (up to 10x) in most cases
– “If greyscale pattern matching works, use it”
Geometric
– Edge based algorithm
– Allows scaled, rotated and translated matches
– Sub-pixel accuracy
– Greater tolerance of lighting changes
– Can cope with occlusions
9. Analysis
Blob
– Only works on binary images
– Faster and more cost effective
– Feature rich
– As a threshold is used, uneven lighting can cause problems
Edge
– Works on greyscale images
– Sub-pixel accuracy
– Similar feature list to blob analysis
– Not affected by lighting changes
(Images: Original → Threshold → Count)
10. Character Recognition/Verification
8 & 3 starting to merge; 0s still not continuous characters; would read as lowercase ‘n’.
14. Clear Visual Display for the Operator
1 Operator ID
2 Project ID
3 Previous results
4 Statistics selector
5 Master image
6 Last test image
7 PASS result display
8 Last test results
9 Camera selector
10 Trigger
15. For a failed test, both Master and test images are displayed, with a red indicator box around the failed feature
11 Fail result display
12 Failed image & master image comparison
13 Failed feature(s) detail and actual recorded values
23. Ask The Experts
We Can Help With…
– Vision Software Design
– Machine Vision Training
– Operator Interface Design
– Mechanical System Design
– Commissioning
– Support
24. Any questions?
Thank you for your attention
Please visit our stand #48 for live demos
www.clearview-imaging.com
Editor's Notes
Sometimes we come across people who doubt vision systems or have a fear that they might not work for their application.
For the next 15-20 minutes, we are going to look at the Machine Vision issue and imagine it as a puzzle.
As with all puzzles, given time, knowledge and dedication the pieces can be put together, and that’s how we will end up with a successful vision system.
Vision systems have been around for a long time. You can see here one of the first vision systems ever developed.
This is looking back 50 years, and it was incredibly expensive, extremely large and terribly slow.
If we think about all the incremental changes in technology that have happened since then it is quite incredible (really).
The power and capabilities of vision systems have increased so much that all of that can now be done within a smart camera such as the one you can see here, which can literally be held in my hands. We are now at the point where there are millions of systems deployed around the world. This is proven technology that has evolved over 50 years at a rapid pace, especially over the past 10 years in terms of technological advancement and deployment.
If we use the rotten apple analogy here, yes there are some bad apples within vision, however, just because a system fails every now and then, this does not mean that all vision systems are bad, and there are always reasons why those systems didn’t make the grade.
So, let’s get into the real substance of this presentation and try to answer ‘Why do vision systems fail?’
As mentioned, we are going to break this down into 7 pieces of a puzzle. Not all of these pieces are technical, but they are all vital for a working system. Let’s take a look at these one by one.
The starting point here is software and, in particular, algorithms, which are the brains of any vision system.
Several of these algorithms exist, depending on what your application needs to do.
It is important that the correct algorithm is used for the correct application.
I am going to run through 3 of the most commonly used ones in vision systems today.
Pattern Matching can locate a part or an object in a field of view and verify whether it is there or not; it can help guide robots; and it can detect if all pieces of a larger part are present. In the images towards the left-hand side, the T and L shapes could be picked and placed by a robot, and on the right-hand side we can see some semiconductors where we want to see whether the pads are present or not.
There are two traditional types of pattern matching: Geometric and Correlation (also known as Normalised Greyscale Correlation).
Geometric is the more advanced way of doing pattern matching and it is an edge based algorithm.
So it will be looking for the edges of each object that has been defined. The advantages of this are that we can do part location when these parts are scaled, rotated or displaced.
A 50% increase or decrease in size will still be detected and rotated parts can be detected with minimal additional CPU usage as it is done intrinsically as part of the programme.
As this is looking for edges, changes in lighting can also be dealt with.
Occlusions and overlaps can also be detected.
Correlation is looking more simplistically at pixel value. So the pattern is detected by the matching of those pixel values.
This is an extremely fast and efficient algorithm, but it is not good with lighting changes, as these could cause the part to not be detected.
Only rotated and translated matches will work (not scaled), which makes it slightly less robust.
But the general rule here is “if greyscale pattern matching works, use it”
So you would not need to pay for more expensive licenses or computational power; however, if your application has those restrictions, then you would move up to geometric.
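To make the correlation idea concrete, here is a minimal numpy sketch of normalised greyscale correlation: the template slides over the image and the best-scoring position wins. This is a toy illustration only; real vision packages implement it far faster, with rotation handling and sub-pixel refinement.

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image; return the (row, col) of the best
    normalised greyscale correlation score, plus that score (1.0 = perfect)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()          # normalising makes the score
            denom = np.sqrt((w ** 2).sum()) * t_norm  # brightness-invariant
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Because both window and template are mean-subtracted and normalised, a uniform brightness shift does not change the score, but the pixel-by-pixel comparison still fails on scaled parts, matching the limitations described above.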
If you wanted to count pills in a foil packet or washers perhaps, you could use an analysis algorithm to do so. There are two main types of this, one being edge analysis and the other blob analysis.
Edge:
Works on both colour and greyscale images
You can get sub-pixel accuracy, as shown in the bottom image on the left, with the line passing through the pixels.
Lighting changes will not affect the detection.
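One simple way sub-pixel edge accuracy is achieved is by interpolating where a greyscale profile crosses a threshold, rather than snapping to whole pixels. The sketch below is a hypothetical illustration of that idea, not any particular vendor's edge tool.

```python
import numpy as np

def subpixel_edge(profile, threshold=128.0):
    """Locate the first rising edge in a 1-D greyscale profile with sub-pixel
    accuracy by linearly interpolating between the two pixels that straddle
    the threshold. Returns None if no rising edge is found."""
    for i in range(len(profile) - 1):
        a, b = float(profile[i]), float(profile[i + 1])
        if a < threshold <= b:
            # fraction of the way from pixel i to pixel i+1
            return i + (threshold - a) / (b - a)
    return None
```

For a profile ramping 0 → 64 → 192, the edge lands between pixels rather than on one, which is what gives edge analysis its sub-pixel measurement capability.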
Blob:
Very basic, one of the first algorithms developed, and still important today.
Fast, cost effective and proven.
Works on binary data. From the images you can see the original greyscale; then you have to apply a threshold to make it a binary image. This means making any pixel value above a threshold of, say, 128 a 1 and anything below it a 0, converting the whole image into 1s and 0s.
This means that uneven lighting will cause issues.
Both have a similar feature list, meaning you can find further mathematical properties, such as the contour, elongation, or the centre point.
This exemplifies again that a simple, consistent setup can use a simple and efficient algorithm to solve the issue. But as complexity arises, a more in-depth algorithm can be applied to work around it.
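The threshold-then-count pipeline described above can be sketched in a few lines: binarise the greyscale image, then flood-fill connected regions to count blobs. This is a minimal teaching sketch, assuming 4-connectivity; production blob tools also report features such as area, centre point and elongation.

```python
import numpy as np

def count_blobs(grey, threshold=128):
    """Threshold a greyscale image to binary (above threshold = 1), then
    count connected blobs using a simple iterative flood fill."""
    binary = grey > threshold
    seen = np.zeros_like(binary, dtype=bool)
    count = 0
    for r0 in range(binary.shape[0]):
        for c0 in range(binary.shape[1]):
            if binary[r0, c0] and not seen[r0, c0]:
                count += 1                     # found a new blob
                stack = [(r0, c0)]
                seen[r0, c0] = True
                while stack:                   # flood-fill its pixels
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < binary.shape[0]
                                and 0 <= nc < binary.shape[1]
                                and binary[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            stack.append((nr, nc))
    return count
```

Note how everything hinges on the single threshold value: uneven lighting shifts pixels across it and splits or merges blobs, which is exactly the weakness mentioned above.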
Here we are looking at character recognition and verification.
Inkjet printing is often used on the side of food and beverage products, and an example of this can be seen on the screen. To the human eye, it is very easily read: ‘8313 5 P1 07:10’.
This isn’t so easy for a machine to read, however, with the varying background, gaps, lighting changes and other factors all inhibiting its ability.
Traditional character verification programmes use a pre-processing step to join the dots in order to read these characters, which you can see in the middle line. As you can also see, this starts to cause issues. The 8 and the 3 are beginning to merge together, but the 0 at the end is still not a continuous character, meaning it cannot be read. To make that into a continuous character, you would end up with the bottom line, which is all merged together and equally unable to be read or verified by traditional methods.
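That traditional "join the dots" pre-processing is typically a morphological dilation: every foreground pixel grows into its neighbourhood so nearby dots merge. The toy version below shows why it is a blunt instrument; the same growth that closes the gaps inside one character eventually closes the gaps between characters too.

```python
import numpy as np

def dilate(binary, iterations=1):
    """Grow each foreground pixel into its 3x3 neighbourhood; the classic
    pre-processing used to 'join the dots' of dot-matrix print before OCR."""
    out = binary.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)               # pad with False so edges work
        grown = np.zeros_like(out)
        for dr in (-1, 0, 1):                 # OR together all 9 shifts
            for dc in (-1, 0, 1):
                grown |= padded[1 + dr:1 + dr + out.shape[0],
                                1 + dc:1 + dc + out.shape[1]]
        out = grown
    return out
```

Two dots one pixel apart merge after a single iteration; keep iterating and separate characters merge as well, which is the failure mode shown on the slide.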
This is where you would need a really advanced algorithm.
An algorithm such as SureDotOCR, which is Matrox’s patented algorithm. This moves away from the traditional ‘merging’ of the dots and analyses each dot individually.
It can deal with text being stretched, rotated, italicised, different fonts and lighting variations.
The point being made here is that for extremely complicated scenarios, specific, advanced algorithms might be required to deal with them.
Complexity.
It’s important to remember that very often vision systems aren’t used by vision experts. They are typically used by the general workforce within a company, who don’t have specific vision knowledge. Therefore, in order for a vision system to be utilised successfully, it has to be easy to understand and use.
An overwhelming GUI, like the one shown here, is a common piece of feedback. Interfaces must be easy to understand, and the information being output must be easily recognisable.
To provide an example of a clean, easy to understand interface, here we can see Matrox’s SureDotOCR again with clear indication items that have passed or failed, with simple feedback.
It is also based around HTML 5, so it is easily accessible from a web browser and very easy to follow.
So if we were to look at a more real life example, this is what we call Vision Box and it shows the display generated for the operator.
This is typically used by automotive part manufacturers, so as you can see, in this case we are looking at a radiator.
You can see the master template image being displayed at point 5, with the inspected image below at point 6.
Point 7 shows a clear indication of the PASS or FAIL status of the inspected part, whilst the previous inspections are also displayed below. All of which is initiated using the point 10 trigger button.
What is shown here is that there is no overload of boxes and information. The important parts are displayed so that the operator can have immediate and simple access to all that is required, without confusion.
Here is the display when a part fails.
The FAIL indicator is clear at point 11.
At point 12, where the inspected image is displayed, a red box appears to outline where the part has failed.
Flexibility.
Another piece of feedback that the world of vision often receives is regarding the setup, and then any ongoing changes and maintenance that need to be applied to the system.
Often, the original vision experts will have to be called out to adjust or update something that could very well be just a menial task.
This is where consideration of how you build your vision system comes in, and what programmes you use to build the application.
This is where it is worth considering a programme such as Matrox Design Assistant, which is a flowchart based machine vision software.
Depending on your skill level and whether you have the time, you might want to build the vision system yourself, or you could get a vision expert to do so.
But once the system has been deployed, during a handover period, you can be taught how to make changes and edit the flowchart. This is because there is no coding involved here, it is blocks of a flowchart that make this vision system, which shouldn’t be too intimidating.
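The flowchart idea can be illustrated with a toy pipeline: each block is an independent step, so reordering or swapping a block does not require touching the others. This is only a hypothetical sketch of the concept, not how Matrox Design Assistant is implemented; all the names below are invented for illustration.

```python
def run_flowchart(steps, context):
    """Run each flowchart block in order, passing a shared context dict.
    A block is just a function, so blocks are easy to insert or reorder."""
    for step in steps:
        context = step(context)
    return context

# Hypothetical blocks for a pill-counting job
def acquire(ctx):
    ctx["image"] = ctx["camera"]()        # grab a frame from the camera
    return ctx

def threshold(ctx):
    # binarise: pixel values above 128 become True
    ctx["binary"] = [[px > 128 for px in row] for row in ctx["image"]]
    return ctx

def count(ctx):
    # count foreground pixels as a crude stand-in for blob analysis
    ctx["count"] = sum(px for row in ctx["binary"] for px in row)
    return ctx
```

An operator editing this "flowchart" only swaps or re-wires blocks; no block's internals need to change, which is what makes handover and later adjustments less intimidating.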
Setup.
Setup must be done correctly and isn’t something that can be rushed when creating a vision system.
When the software side of things is completed, it is crucial to remember that vision systems often have contact with production areas. Therefore all hardware, sensors, illumination, cables, everything has to be well placed and well connected, so that the cameras can trigger properly from the correct positions to give the vision system the best chance of performing well.
Within the setup of your vision system, it is important to verify the type of I/O that it is using.
With polling I/O, the state of the system is checked periodically.
If we look at the image in the middle here: if the status of the system isn’t checked at the correct time, the bottle could be too far down the conveyor belt and be missed by the cameras, and in turn by the whole system.
Real-time I/O removes this possibility. As soon as the I/O changes state, the vision system will act, ensuring it can function as desired.
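The contrast between the two I/O styles can be sketched as follows. The polling loop only sees the sensor every `poll_interval`, so a part passing in a shorter window is missed; the event-driven version reacts the instant the state changes. All names here are hypothetical illustrations, not a real I/O API.

```python
import threading
import time

# Event-driven (real-time) I/O: a hypothetical sensor interrupt handler
# fires immediately, with no waiting for the next poll.
triggered = threading.Event()

def on_part_detected():
    triggered.set()          # act at once, e.g. fire the camera trigger here

# Polling I/O: state is only checked every poll_interval seconds, so a part
# present for less than that window can slip past unseen.
def poll_sensor(read_state, poll_interval=0.05, timeout=1.0):
    """Repeatedly read the sensor until it goes high or the timeout expires."""
    deadline = time.monotonic() + poll_interval and time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_state():
            return True
        time.sleep(poll_interval)
    return False
```

Shrinking the polling interval narrows the blind window but burns CPU; the event-driven approach avoids the trade-off entirely, which is why real-time I/O is preferred for fast-moving lines.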
Okay, so, you’ve made it through the first 4 stages, you have your algorithms, it is all setup, it’s understandable and it’s manageable. But your system still might fail, why?
Sometimes, within a manufacturing environment, the vision system has been pushed for by an engineering team, but not so much by the production team working on the floor. Resistance against vision systems often arises because they are seen as a competitor for those jobs.
A reluctance to work with a system can see those systems not used properly, switched off or even damaged.
This is where, as a company installing a vision system, it is important to get that ‘buy in’ sentiment: the vision system is there to assist, and it needs to be accepted from top to bottom within the company.
Cost is always a key contributor.
Fully installed vision systems could be from as low as £10,000 (possibly less), but equally they could be up to hundreds of thousands of pounds.
It is important to analyse this beforehand and make sure things like return on investment are calculated up front: looking at how much it costs to produce your products and how much failures then cost you, so that open discussions can take place with the vision system company. That way your expectations are clear from the start, ensuring that cost targets are met and that cost isn’t an inhibitor to the project.
Lastly, it is important to remember that not everyone has the knowledge to create these vision systems.
Knowledge can of course be passed on and taught, with lots of courses available both online and in person these days, but it requires the time and dedication to gain.
If this isn’t something you can commit to, this is where we, as machine vision experts, can help.
If you remember these important pieces of the machine vision puzzle, you will be sure to end up with a great vision system.
We have a team of experts, not only within our engineering team but within our sales team as well, who are all qualified machine vision professionals.
We have the capability to assist with…
Vision Software Design
Machine Vision Training
Operator Interface Design
Mechanical System Design
Commissioning and technical support
Thank you very much, it’s been a pleasure talking to you all today.
You can ask any questions here or come over to our stand, #48, and we’ll be happy to assist you with any questions and show you some live demonstrations.
Thanks again.