Moving vision to 64-bit
Written by Yvon Bouchard, DALSA, Montreal and edited by Marc Fimeri, Adept Electronic Solutions, Australia
31 March 2009
A nearly geometric growth in the data requirements of many machine vision applications is pushing 32-bit processing to its limits. The challenge is not processing power, however, but addressing the memory buffers that systems fill with ever-increasing volumes of data. Moving vision systems to 64-bit operation can solve the data challenge, but it requires fully updated hardware and software support.
Not long ago the electronics industry asked, “who would ever need more than 4 Gbytes of memory in a PC?” Yet that ceiling, imposed by the 32-bit limit many operating systems place on the address space they can handle, is rapidly becoming an impediment to machine vision applications. Several factors are pushing machine vision data requirements against that ceiling, including image size, throughput demands, and the increasing use of color.
The image sizes needed for machine vision applications – in terms of memory requirement – are increasing for several reasons. One is simply that the objects to be inspected are larger. Another is the need for multiple cameras and images. An inspection system that must examine a populated printed circuit board (PCB), for instance, may require thousands of images taken from different angles and different positions in order to inspect different aspects of the board, such as chip lead positions, printing and other markings, solder joint quality, and the like.
Size increases also stem from increasing demands for higher image resolution. Inspection of a flat-panel display, for instance, must be able to resolve objects that continue to shrink as panels evolve toward high definition. This requires more camera pixels per image inch, which compounds the image size growth that stems from increasing display panel sizes. Compounding is also at work in inspection systems that must make three-dimensional (3D) measurements, such as the height of solder paste applied to a PCB. The thickness of paste on a PCB depends on the type of component to be mounted and so varies across the board. To make accurate depth measurements, the image must have a resolution 10 to 100 times finer than the measurement accuracy required.
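The oversampling rule above reduces to simple arithmetic. A minimal sketch, using hypothetical numbers (the 25 µm paste-height tolerance is an assumption for illustration, not a figure from a specific system):

```c
#include <assert.h>

/* The 10x-100x rule for 3D measurement: to verify a height to a given
 * tolerance, the depth image must resolve 10 to 100 times finer than
 * that tolerance.  Units are micrometres; the numbers are illustrative. */
static double required_depth_resolution_um(double tolerance_um, double factor)
{
    return tolerance_um / factor;   /* finer resolution = smaller value */
}
```

For a hypothetical 25 µm tolerance, the depth image would need to resolve 2.5 µm at the low end of the rule and 0.25 µm at the high end – and every step finer multiplies the pixel count, and hence the memory footprint, of each image.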
Along with increasing image size the machine vision industry must address continual demands for faster inspection throughput. Because the inspection throughput directly affects manufacturing productivity, time really is money. Thus, continuous inspection systems using line cameras must not only provide more pixels per line and more lines per inch, they must scan more inches per second – filling memory very quickly. Area cameras also need to capture larger images more quickly and rapidly move them into storage for processing.
Increasing industry interest in color images is further compounding the growth in data storage requirements. Color images typically add three components – red, green, and blue – to the single intensity channel of a monochrome image, resulting in a data requirement as much as four times that of a comparable monochrome image. The vision system could instead derive the intensity information from just the three color components, but the computation required typically creates an unacceptable load on a system’s processing capacity.
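The intensity-from-color computation referred to above can be sketched as follows. The ITU-R BT.601 luma weighting used here is one common choice, assumed for illustration; the article does not specify which conversion a given vision system uses:

```c
#include <stdint.h>

/* Deriving an intensity (luma) value from R, G, B samples using an
 * integer approximation of the ITU-R BT.601 weights
 * (0.299 R + 0.587 G + 0.114 B).  Running this per pixel over a
 * multi-megapixel image stream is the processing load that makes
 * storing a fourth, precomputed intensity channel attractive. */
static uint8_t luma_bt601(uint8_t r, uint8_t g, uint8_t b)
{
    /* 77 + 150 + 29 = 256, so the >> 8 renormalizes exactly. */
    return (uint8_t)((77u * r + 150u * g + 29u * b) >> 8);
}
```

Storing intensity alongside the three color channels trades that per-pixel arithmetic for the fourth channel of memory – exactly the 4x data growth the paragraph describes.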
Memory is the Barrier
The net effect of all these compounding factors is an exponential growth in data requirements for images that is pushing many systems beyond the addressable space of 32-bit operating systems. Performance is not as much of an issue. Today’s PCs use state-of-the-art processors that have as many as four processor cores on chip and are capable of handling data at rates of 600 to 700 Mbytes/sec. The advent of PCI Express gives system backplanes the capacity to transfer data at 5 Gbytes/sec. These speeds are typically high enough to handle images as fast as they are acquired.
The machine vision process, however, works with pixels in blocks rather than one at a time. Thus, vision system inspection rates are based on average rather than continuous processing speeds. The system acquires an object’s image, begins processing, and finishes while the next object moves into the inspection area. In a wafer inspection system, for example, the camera takes an image of the wafer under test, sends that image to the vision system, and loads a new wafer as the vision system continues processing. Ideally, the vision system will complete its processing in the time it takes the wafer handling system to move the new wafer into place so that the handling system does not have to pause.
To achieve maximum processing efficiency, however, the vision processor must buffer data in its local memory (on-line) so that it does not have to wait for data to load. An external data storage device such as a disk drive is too slow to keep up with frame rate requirements, especially because such storage requires data to move twice: once to the drive and later to the vision system. In addition, disk drives carry an overhead penalty because they use a file structure for data access, not the first-in, first-out (FIFO) access that vision systems require. Finally, given the image size increases now occurring and the latency of image storage and retrieval, the drive would need to offer terabytes of storage in order to provide adequate buffering. Drive systems of that size would be cost prohibitive.
On-line data buffering does not suffer these drawbacks. The system does not need to move information twice and is easily configured to store and retrieve data using FIFO access with no overhead penalty. Memory cost is also not a major issue. The performance of on-line storage is typically fast enough that buffering requirements reduce to two images at most (one incoming and one in processing), and with DRAM pricing now down to $10 to $12 per gigabyte, on-line storage is quite affordable.
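The two-buffer arrangement just described is a classic ping-pong scheme. A minimal single-threaded sketch (names are illustrative; in a real system the acquire side would be fed by the frame grabber's DMA engine and the swap would be synchronized with it):

```c
#include <stddef.h>

/* Ping-pong buffering sketch: one buffer receives the incoming image
 * while the other is being processed, then the roles swap once per
 * frame.  This single-threaded sketch shows only the bookkeeping. */
#define IMAGE_BYTES (4 * 1024)   /* stand-in for a multi-megabyte image */

static unsigned char buf_a[IMAGE_BYTES], buf_b[IMAGE_BYTES];

typedef struct {
    unsigned char *acquire;   /* camera/frame grabber writes here */
    unsigned char *process;   /* vision code reads here */
} PingPong;

static void pingpong_init(PingPong *pp)
{
    pp->acquire = buf_a;
    pp->process = buf_b;
}

/* Called once per frame, after acquisition of the current image completes. */
static void pingpong_swap(PingPong *pp)
{
    unsigned char *tmp = pp->acquire;
    pp->acquire = pp->process;
    pp->process = tmp;
}
```

Because both buffers must hold a full image simultaneously, the memory footprint is twice the image size – which is how multi-gigabyte images push the total toward the 4 Gbyte ceiling discussed below.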
Solving Buffer Issues
The challenge that vision systems face with these large image requirements is not so much performance as it is storage space. Many current systems need buffer sizes as large as 3.5 Gbytes. This is perilously close to the 4 Gbyte memory addressing limit of 32-bit processing, leaving little room for other system storage needs much less expansion in image size.
There are workarounds available – such as paging and virtual addressing – that extend the memory size a 32-bit system can handle. Such schemes use a two-step addressing system that calls first for selection of a “page” or block of memory to work within, followed by normal memory access within that block. DALSA has offered support for such address extensions for years, working with operating systems such as Windows Server 2003 and its Datacenter edition. The problem with such memory extensions, however, is that they increase software complexity and add overhead to manage the page addressing when accessing data, especially when an access must cross page boundaries. The additional overhead works to limit vision system throughput.
The other solution to the memory size limit is to move the system design to the 64-bit addressing level. With 64 bits the directly addressable space increases from 4 Gbytes to more than sixteen billion Gbytes. This is an essentially infinite memory space for systems to work within, limited in practice only by the cost of populating that space with physical memory.
Moving to 64-bits
Moving a system to embrace 64-bit operation, however, affects the entire system design. First, the hardware must support 64-bit operation. Most high-performance processors can handle 64 bits, but peripheral devices must support them as well. The vision system camera, for instance, will need to support 64-bit addresses although it does not have to use 64-bit data. Similarly, the frame grabber that buffers image data must allow 64-bit addressing. Legacy systems may thus need hardware upgrades in order to move to 64-bit operation.
As well as the hardware, the system software must work with 64-bit addresses and data words. The software involved includes the operating system (OS), hardware drivers, image processing libraries, and user applications code. The OS part is easy; Windows Vista is available both in a 32-bit and a 64-bit version (Vista64). The challenge lies in the other software elements.
For new designs the challenge is less significant as long as all the building blocks can be chosen to support 64-bit operation. If 32-bit legacy software is to be used, however, it will require some rework. Code that is written in a high-level language such as C can be ported to 64-bit operation by recompiling, a fairly painless transition. Drivers and libraries typically fall into this category, as does most applications code. Some legacy application software, however, is written in assembly language to maximize performance. Such hardware-specific custom code is the most difficult and expensive to migrate.
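A classic instance of why the recompile still warrants review (illustrative code, not taken from any DALSA driver): storing a pointer in a 32-bit integer compiles and works on a 32-bit target but silently truncates addresses above 4 Gbytes on a 64-bit one. C99's `uintptr_t` is the portable fix:

```c
#include <stdint.h>

/* 32-to-64-bit porting pitfall.  On a 32-bit build pointers fit in an
 * unsigned int, so the "bad" round trip appears to work; on a 64-bit
 * build the cast discards the upper 32 address bits. */
static void *roundtrip_bad(void *p)
{
    unsigned int as_int = (unsigned int)(uintptr_t)p;  /* truncates on 64-bit */
    return (void *)(uintptr_t)as_int;
}

/* uintptr_t is guaranteed wide enough to round-trip a pointer on
 * either target, so this version survives recompilation unchanged. */
static void *roundtrip_good(void *p)
{
    uintptr_t as_int = (uintptr_t)p;                   /* full width */
    return (void *)as_int;
}
```

A related trap: under 64-bit Windows `long` remains 32 bits (LLP64) while under 64-bit Linux it grows to 64 bits (LP64), so code that assumes `sizeof(long) == sizeof(void *)` breaks on one platform or the other. Compilers flag many of these cases, which is why the recompile is "fairly painless" rather than automatic.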
Fortunately for developers, DALSA offers a full selection of hardware and software for designs requiring 64-bit operation. DALSA’s Camera Link series of frame grabbers and image processing boards, for instance, offers full 64-bit operation as well as high-speed PCI Express links for camera attachment. Further, the company is actively migrating all its product lines to PCI Express and 64-bit compatibility. Full software support in the form of drivers for the Vista64 operating system is available for DALSA cameras and frame grabbers, and an application library for 64-bit operation is now coming on line.
DALSA Offers Full Support
Developers creating new code or porting legacy code to 64-bit operation also have access to a complete software development kit (SDK) from DALSA. This kit has seen more than a year of field testing by multiple clients, working through the challenges that inevitably arise with migration to a new system. As a result of this effort, the SDK is a highly stable, proven resource for developers.
The DALSA SDK includes two types of installation software: one for developers and one for system end-users. The developer installation provides full access to tools and libraries under Vista64, with seamless file installation on host development computers. The end-user installation package is available to form part of the developer’s deliverables to the customer, handling installation of application code and library functions to set up system operation without requiring a user interface. Both installations support either 32-bit or 64-bit operation so that developers retain that choice.
This ability to support both performance levels helps development teams prepare for migration to 64-bit operation without yet committing to it. Not all applications will ever need the full power of 64-bit systems and many that will eventually migrate do not need to do so now. The freedom to choose gives developers an opportunity to avoid the higher costs of 64-bit systems when they are not needed.
For most vision applications, however, increasing image size and throughput demands will ultimately push system memory requirements past the 4 Gbyte limit of 32-bit operation. When that happens, migration to 64-bit operation provides the simplest approach to handling large data sets as well as offering extensive growth room for system enhancement. DALSA offers developers a full suite of hardware and software for 64-bit systems that still support 32-bits. These offerings are fully field-tested and proven, ensuring that developers have a smooth transition to the power of 64-bit systems when application requirements push past legacy limits.
Meet the authors
Adept Electronic Solutions are 'The Machine Vision and Imaging Specialists' and distributor for Dalsa products in Australia and New Zealand. To find out more about this article or any Dalsa product please email us at: email@example.com, call us at Perth (08) 92425411 / Sydney (02) 99792599 / Melbourne (03) 95555621 or use our online contact form.