How to evaluate a pattern-finding tool

 

Once a machine-vision system has acquired an image, a software image-analysis algorithm, known as a pattern-finding tool (PFT), must determine whether the image contains a particular pattern. While many people consider a pattern to be composed of one or several objects of interest, it actually includes anything that can be represented by a group of pixels in an image.

A PFT must be trained before it can recognize a pattern automatically in an unknown image. In many inspection applications, the training includes storing a "golden" image (an image with no defects). The application detects defects by subtracting the golden image from the acquired image. To ensure a proper subtraction, the system must first align the images accurately. A PFT finds the pose (position and angle) of the object of interest in both the target and golden images and then performs the alignment in software.
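The golden-image workflow above can be sketched in a few lines. This is a minimal, translation-only illustration in plain Python: the pose is reduced to an (x, y) offset as if reported by a hypothetical PFT, and `find_defects` is an illustrative helper, not a call from any real library.

```python
# Minimal sketch of golden-image comparison with translation-only
# alignment. Images are grayscale pixel grids (lists of lists).

def shift(image, dx, dy, fill=0):
    """Translate an image by integer pixel offsets, padding with `fill`."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out

def find_defects(golden, target, golden_pose, target_pose, threshold=30):
    """Align the target to the golden image using the two poses,
    subtract, and flag pixels whose difference exceeds the threshold."""
    dx = golden_pose[0] - target_pose[0]
    dy = golden_pose[1] - target_pose[1]
    aligned = shift(target, dx, dy)
    return [(x, y)
            for y in range(len(golden))
            for x in range(len(golden[0]))
            if abs(golden[y][x] - aligned[y][x]) > threshold]

golden = [[0] * 5 for _ in range(5)]
golden[2][2] = 200                  # the trained feature
target = [[0] * 5 for _ in range(5)]
target[2][3] = 200                  # same feature, shifted right by one
target[0][4] = 255                  # an actual defect
print(find_defects(golden, target, (2, 2), (3, 2)))  # → [(3, 0)]
```

Without the alignment step, the shifted feature itself would be flagged as a (false) defect; with it, only the genuine defect pixel survives the subtraction.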

Pattern-finding techniques

Traditionally, PFTs were based on a technique known as normalized cross correlation (NCC). NCC is simple to implement and easy to accelerate, but because its pattern representation is calculated from all pattern pixels, the technique suffers from several major drawbacks.

First, the target pattern must always appear in the field of view at approximately the same angle and distance from the camera; NCC cannot locate smaller or larger instances of the same shape. Second, NCC cannot describe a non-rectangular pattern. If the object of interest within the pattern is not a rectangle, you must train the system on the smallest rectangle that encloses the pattern, which introduces further drawbacks, such as an inability to locate touching objects or handle varying backgrounds.

Finally, the NCC technique cannot detect a pattern with different levels of brightness. A pattern consisting of a white circle on a black background will never match a black circle on a white background.
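The NCC score itself is easy to compute, and a toy version makes the brightness-reversal limitation concrete. The sketch below (plain Python, no vision library) correlates a trained pattern against a matching patch and against its video-reversed counterpart; the reversed patch scores exactly -1 and can never match.

```python
# Toy normalized cross-correlation (NCC) between two equal-sized
# grayscale patches, flattened to 1-D pixel lists.
import math

def ncc(a, b):
    """NCC of two flat pixel lists; returns a score in [-1, 1]."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0

pattern   = [0, 0, 255, 255, 0, 0]       # white bar on black
same      = [10, 10, 250, 250, 10, 10]   # same shape, different brightness
reversed_ = [255, 255, 0, 0, 255, 255]   # black bar on white

print(ncc(pattern, same))       # → 1.0  (linear brightness change: fine)
print(ncc(pattern, reversed_))  # → -1.0 (video reversal: never matches)
```

Note that NCC does tolerate *linear* brightness changes (the mean and standard deviation normalize them away); it is reversal and nonlinear changes that defeat it.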

More recent geometric-based techniques overcome these limitations. A geometric PFT is invariant to rotation and scale and can accommodate many types of image variation, such as nonlinear brightness changes, video reversal, noise, occlusion, and touching objects. As a result, a geometric PFT can significantly increase the efficiency and reliability of your application. When it's time to evaluate and choose a PFT for your machine-vision system, concentrate on two main criteria: ease of use and performance.

Ease of use

Look for a PFT that provides automated training and execution mechanisms. This means that the PFT must automatically compute all parameters used to tweak the algorithm's behavior from the training and target images.

It also helps if the PFT has a highly visual and intuitive training tool. In a geometric PFT, the user should have the option of manually editing selected contours from the training image. The training tool should then compute its representation from the image alone, that is, without further user intervention. The user should still be able to apply feedback from the internally calculated values to adjust the representation, ensuring optimal results during the actual inspection.

During execution, the pattern-finding algorithm must also be able to adapt itself to changes in the target image. As with the training tool, users should be able to fine-tune these settings for increased performance.

The PFT should also be easy to program. A highly abstract application programming interface (API), coupled with a graphical user interface, tends to shrink the learning curve and minimize development time. Users can think of the PFT as a "black box": the input is an image, and the output is the pose of a pattern found with default parameters. You should be able to learn to fine-tune those parameters using the API within hours, not days or weeks. It doesn't matter which programming language or software technology the PFT supports, as long as the tool provides a simple architecture, good documentation, and a complete set of code samples.
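As a sketch of what such a "black box" interface might look like, here is a hypothetical Python API. The class, method, and parameter names are illustrative only and do not belong to any real PFT library; `find` is a stub that returns no matches.

```python
# Hypothetical "black box" PFT interface: image in, poses out,
# with a single default tuning parameter exposed.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float       # position in pixels
    y: float
    angle: float   # rotation in degrees
    scale: float   # 1.0 = trained size
    score: float   # match quality in [0, 1]

class PatternFinder:
    def __init__(self, acceptance=0.7):
        # One default knob; a good API computes everything else itself.
        self.acceptance = acceptance
        self._model = None

    def train(self, pattern_image):
        """Build an internal representation from the training image.
        Here it is simply stored; a real tool would extract features."""
        self._model = pattern_image

    def find(self, target_image):
        """Return a list of Pose results scoring above `acceptance`."""
        if self._model is None:
            raise RuntimeError("train() must be called before find()")
        return []  # stub: a real implementation searches target_image
```

The point of the sketch is the shape of the API, not the internals: two calls (`train`, `find`), a structured result, and sensible defaults that a user only overrides when needed.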

Performance

A PFT's robustness is defined as its ability to find a desired pattern in an image containing it and to fail to find the pattern in an image not containing it. A highly robust PFT will reduce false failure rates to a minimum, regardless of whether the pattern is situated at the same angle as the trained pattern. With some PFTs, slight differences in board angle during inspection can produce large numbers of false failures. An angle-invariant PFT can locate the pattern at almost any angle in the target image without being affected by the angle change. A scale-invariant PFT can find the pattern at any size.

A robust PFT must also be able to handle degraded images. Poor lighting often results in a varying background. PFTs must handle reflections on metallic or glass surfaces that create artifacts in the image, as well as noisy, saturated, poorly contrasted images, blurred images, and other degradation. Objects may also be touching or partially occluded.

How do robustness constraints affect execution time? Image degradation and other variations from the stored pattern (such as rotation and scale) make the algorithms slower and more complex. Other factors include the number of patterns per execution and the contour density in the target image. A good PFT provides the best average speed consistent with these constraints. If the software can run on embedded processors, users can execute it on distributed-processing systems to increase running speed. If the PFT is not fast enough to keep pace with the input data rate, it should allow users to adjust certain criteria, such as accuracy.

The positioning accuracy of a PFT is a measure of the difference between an object's real pose and its measured pose. Influences that affect accuracy include:

  • poor illumination,
  • fuzzy or blurred edges on the object of interest,
  • radial distortion caused by the camera lens,
  • jitter introduced in the digitization of the video signal, and
  • accuracy of the PFT.

Because of the digitization process, a PFT's accuracy is affected by the different geometries (such as translation, rotation, and scaling) under which the pattern of interest is captured. PFT accuracy is only one of the elements determining system accuracy, but you can measure PFT accuracy separately by following these steps:

1. Acquire a real image and apply highly accurate geometrical transformations to it (for example, translate by 0.172 pixel and rotate by 32.44°).
2. Run the PFT on both the original and the transformed image. Compute the difference between poses (considering the transformation). This is the pose error.
3. Repeat the last step for a series of transformations and compute statistics on the results. A standard way to calculate accuracy uses the 3-sigma rule: three times the standard deviation of the pose error.
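The three steps above can be sketched as a small test harness. Since no real PFT is available here, the measured pose comes from a stand-in that adds small Gaussian noise (sigma = 0.05 pixel, an assumed value) to a known x translation; the harness then reports the 3-sigma accuracy figure.

```python
# Sketch of the PFT accuracy test: apply known transformations,
# collect pose errors, and report 3x the standard deviation.
import math
import random

def three_sigma(errors):
    """Three times the (population) standard deviation of the errors."""
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return 3 * math.sqrt(var)

random.seed(0)  # deterministic run
x_errors = []
for _ in range(1000):
    true_x = random.uniform(0, 100)              # step 1: known translation
    measured_x = true_x + random.gauss(0, 0.05)  # step 2: stand-in "PFT"
    x_errors.append(measured_x - true_x)         # pose error

print(f"x accuracy (3-sigma): {three_sigma(x_errors):.3f} px")
```

With a real PFT, `measured_x` would come from running the tool on the transformed image, and the same statistic would be gathered separately for y, angle, and scale errors.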

The future of PFTs relies on geometric techniques because of their ability to deal with a variety of disparate situations. Nevertheless, you must understand both your application and appropriate evaluation criteria to select the right PFT to meet your needs.


For more information please contact us.
