This chapter gives detailed information on the FPGA-Designs of the FESTO SBOx camera. Please keep in mind that the FPGA functionality differs fundamentally between cameras with a greyscale sensor and those with a color sensor. This is because the typical preprocessing steps for greyscale sensors and for Bayer-pattern based (color) sensors are widely different.
For important information about using the FPGA functions in applications, see here.
Besides the functional units, the datapath switching concept forms the basis of the FESTO SBOx camera's FPGA-Design. Thanks to the datapath switching ability, the FPGA can not only transfer data straight from the sensor to the processor, but also from/to the FPGA-ImageRAM.
A datapath always starts at a single source and ends in one or two sinks. The FPGA's functional blocks can be switched on/off within a datapath. Note that not all combinations shown in the FPGA Greyscale Block Diagram and the FPGA Color Block Diagram can actually be used. All supported datapaths are listed below.
The FPGA contains an SDRAM controller interfacing the 32 MB SDRAM module that is attached exclusively to the FPGA.
In contrast to the high-level image buffer management implemented in the camlink.ko driver module, which uses the camera's main memory, there is no memory management for the FPGA-ImageRAM. The advantage of this generic approach is that an application is free to decide how to use the FPGA-ImageRAM. The obvious disadvantage is that the application also has to do the necessary memory management itself.
This memory can be used, for example, for:
To handle the FPGA-ImageRAM from an application, the API provides the following operations:
Furthermore, two operation modes have to be mentioned separately: Hardware Image Correction and Data Splitting. In both modes the FPGA-ImageRAM (or parts of it) is used exclusively. See also Notes on FPGA Design, API and Programming and the programming examples.
FPGA-ImageRAM and acquisition modes: When accessing the FPGA-ImageRAM from your application, you have to make sure that ACQ_MODE_SINGLE_SHOT or ACQ_MODE_STOP is used. Other acquisition modes continuously transfer data from the sensor to the driver's image buffer, blocking the CPU's DMA channel. If you want to use other acquisition modes, you have to stop them before executing any FPGA-ImageRAM related API calls.
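As an illustration only, the following sketch stops a running acquisition before touching the FPGA-ImageRAM. ACQ_MODE_STOP and ReadImageRAM() are named in this chapter, but the prototypes shown here and the 0-on-success convention are assumptions; SetAcquisitionMode() is a hypothetical name.

    #include <stdio.h>

    /* Assumed prototypes; the real declarations and the ACQ_MODE_*
     * constants come from the camera API header. */
    extern int SetAcquisitionMode(int mode);                 /* hypothetical name */
    extern int ReadImageRAM(void *dst, unsigned long offset, /* assumed signature */
                            unsigned long len);

    int fetch_from_imageram(unsigned char *buf, unsigned long len)
    {
        /* 1. Make sure no continuous acquisition blocks the DMA channel. */
        if (SetAcquisitionMode(ACQ_MODE_STOP) != 0) {
            fprintf(stderr, "could not stop acquisition\n");
            return -1;
        }

        /* 2. FPGA-ImageRAM related calls are safe now. The application
         *    is responsible for its own memory layout, e.g. an image
         *    stored at offset 0. */
        if (ReadImageRAM(buf, 0, len) != 0) {
            fprintf(stderr, "ReadImageRAM failed\n");
            return -1;
        }
        return 0;
    }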
Important notes on using the FPGA-ImageRAM
The selection of the correct datapath does not need to be done by the programmer. It is up to the camlink driver to select the required datapath depending on the sequence of API calls, so no detailed knowledge of the FPGA internals is needed.
The greyscale FPGA-Design provides the following functions:
The following block diagram shows the functional FPGA components and the possible datapaths through them.
The following datapaths are supported by the greyscale FPGA-Designs:
[]…optional, |…or
The color FPGA-Design provides the following functions:
The following block diagram shows the functional FPGA components and the possible datapaths through them.
The following datapaths are supported by the color FPGA-Designs:
[]…optional, |…or
The FPGA design contains a generic 3×3 kernel that allows various types of filters to be applied for preprocessing (a software reference sketch follows after the notes below). The following filters are implemented:
On a color camera it is not possible to apply one of the filters directly to the sensor's Bayer-pattern image (this would make no sense). Therefore, if you want to use the filters, you also have to enable the Bayer2X unit and select the color component to which the filter is applied.
Important note: Due to limited FPGA resources on the R2 sensor, the filters cannot coexist with the laser scanning unit. Therefore, the filters are not included in the default FPGA-Design. If you want to use them, please contact your distributor to get an FPGA-Design containing the 3×3 filters instead of the laser-scanning unit.
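For reference, such a generic 3×3 kernel corresponds to the following software operation. This is a minimal sketch; the FPGA's actual border handling and fixed-point scaling are assumptions.

    /* Minimal software reference of a generic 3x3 kernel filter.
     * Border pixels are skipped here; the FPGA's actual border
     * handling and scaling are assumptions. divisor must be
     * nonzero. */
    void filter3x3(const unsigned char *in, unsigned char *out,
                   int w, int h, const int k[3][3], int divisor)
    {
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int acc = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        acc += k[dy + 1][dx + 1] * in[(y + dy) * w + (x + dx)];
                acc /= divisor;
                if (acc < 0)   acc = 0;
                if (acc > 255) acc = 255;
                out[y * w + x] = (unsigned char)acc;
            }
        }
    }

For example, a kernel of all ones with divisor 9 yields a smoothing (mean) filter.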
On greyscale cameras this unit allows binarization of the unfiltered or filtered sensor image. On color cameras, binarization can be done after the Bayer2X unit or after the Bayer2X + Filter units.
The binarization itself is described as follows:
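As a point of reference only, the simplest form of binarization is a single-threshold comparison, as in the following sketch; the actual unit may use a different rule (e.g. an upper and a lower bound).

    /* Hedged sketch: single-threshold binarization. */
    void binarize(const unsigned char *in, unsigned char *out,
                  int n, unsigned char threshold)
    {
        for (int i = 0; i < n; i++)
            out[i] = (in[i] >= threshold) ? 255 : 0;
    }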
The white balancing unit allows color corrections to be applied to the image data output by the sensor. The correction is done with the help of user-configurable lookup tables; independent tables exist for the red, green, and blue color components of the sensor's Bayer image.
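Conceptually, the unit performs a per-component table lookup like the following sketch; an RGGB Bayer layout is assumed here and may differ from the real sensor.

    /* Hedged sketch: per-color-component LUT applied to a Bayer image.
     * An RGGB layout is assumed; the real sensor layout may differ. */
    void white_balance(unsigned char *img, int w, int h,
                       const unsigned char lut_r[256],
                       const unsigned char lut_g[256],
                       const unsigned char lut_b[256])
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                unsigned char *p = &img[y * w + x];
                if ((y & 1) == 0)                    /* even row: R G R G ... */
                    *p = ((x & 1) == 0) ? lut_r[*p] : lut_g[*p];
                else                                 /* odd row:  G B G B ... */
                    *p = ((x & 1) == 0) ? lut_g[*p] : lut_b[*p];
            }
        }
    }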
The correction unit allows sensor data to be combined byte-wise with data stored in the FPGA-SDRAM. The following operations are supported:
To use this feature, the application software has to perform the following steps:
Important note: when the correction unit is enabled, the FPGA-SDRAM is used exclusively, so all other FPGA-SDRAM related functions cannot be used!
Please refer also to the programming examples section; an illustrative sketch follows below.
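As an illustration of byte-wise combination, the following sketch shows a saturating subtraction of a stored reference image (e.g. a dark frame) from live sensor data; whether subtraction is among the unit's supported operations is an assumption.

    /* Hedged sketch: byte-wise combination of sensor data with a
     * reference image, as the correction unit does in hardware.
     * Saturating subtraction is shown; the unit's actual operation
     * set is not reproduced here. */
    void correct_subtract(const unsigned char *sensor,
                          const unsigned char *reference,
                          unsigned char *out, int n)
    {
        for (int i = 0; i < n; i++) {
            int v = (int)sensor[i] - (int)reference[i];
            out[i] = (v < 0) ? 0 : (unsigned char)v;
        }
    }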
The so-called Bayer2X unit allows you to convert a Bayer-pattern image into the common RGB or HSV/I color space. The input of the Bayer2X unit is always a Bayer-pattern image; the source can be either the sensor or the FPGA-SDRAM.
One of the following color channels can be selected as output:
The calculation of the RGB components is based on a 3×3 filter kernel over the input Bayer-pattern image: each RGB pixel value is calculated as the mean of the corresponding color pixels found in the 3×3 structure. The calculation of the HSVI components is based on the RGB value of one pixel.
Hue is normalized to H ∈ [0, 192] and divided into the three color areas:

H ∈ [0, 63] … RED
H ∈ [64, 127] … GREEN
H ∈ [128, 192] … BLUE
H = 0 when R = G = B

Saturation S, Value V, and Intensity I are normalized to [0, 255]:

V = max(R, G, B)
I = (R + G + B) / 3.012
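The following sketch shows one plausible software equivalent of the HSVI computation with the normalizations given above; the exact rounding and tie-breaking of the FPGA are assumptions.

    /* Hedged sketch of the HSVI computation described above:
     * H in [0,192] with 64 counts per color area, S/V/I in [0,255].
     * Rounding and tie-breaking may differ from the FPGA in detail. */
    void rgb_to_hsvi(int r, int g, int b,
                     int *h, int *s, int *v, int *i)
    {
        int max = r > g ? (r > b ? r : b) : (g > b ? g : b);
        int min = r < g ? (r < b ? r : b) : (g < b ? g : b);
        int delta = max - min;

        *v = max;                         /* V = max(R,G,B)        */
        *i = (int)((r + g + b) / 3.012);  /* I = (R+G+B)/3.012     */
        *s = (max == 0) ? 0 : (255 * delta) / max;

        if (delta == 0) {                 /* H = 0 when R = G = B  */
            *h = 0;
        } else if (max == r) {            /* red area:   [0,63]    */
            *h = (32 * (g - b) / delta + 192) % 192;
        } else if (max == g) {            /* green area: [64,127]  */
            *h = 64 + 32 * (b - r) / delta;
        } else {                          /* blue area:  [128,192] */
            *h = 128 + 32 * (r - g) / delta;
        }
    }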
The greyscale JPEG encoder allows real-time JPEG compression of greyscale sensor images, or compression of images stored in the FPGA-SDRAM. The output stream of the JPEG encoder is always transferred to the CPU. To obtain a complete standard .jpg image (or file), the software has to add the necessary headers to the data stream. This can be done by using the XXXX library functions. The 0xff byte stuffing can be done by the FPGA.
Contrary to the greyscale JPEG encoder, the color JPEG encoder only allows compression of Bayer-pattern and greyscale images stored in the FPGA-SDRAM. The output of the FPGA color JPEG encoder is a 4:1:1 subsampled, interleaved data stream compatible with the standard JPEG file format (JFIF). Please keep in mind that the color JPEG encoder always uses the FPGA-SDRAM as the source of the Bayer-pattern image and forwards the output stream to the CPU. This means that, in contrast to the greyscale encoder (on a greyscale camera), it is not possible to acquire JPEG images via the GetImage() API call. It is therefore the task of the application software to:
As with the greyscale encoder, it is up to the application to add/handle the headers needed by a .jpg file.
For more details on using the color JPEG encoder, please refer to the programming examples section; a rough flow sketch follows below.
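The overall flow might look like the following sketch. Every function name below is a hypothetical placeholder standing in for the real calls shown in the programming examples section.

    /* Hedged sketch of a color JPEG cycle; all names below are
     * hypothetical placeholders for the real camera API. */
    extern int StoreImageToImageRAM(unsigned long offset);
    extern int EncodeJpegFromImageRAM(unsigned long offset,
                                      unsigned char *dst, int dst_size);

    int acquire_color_jpeg(unsigned char *jpeg_buf, int buf_size)
    {
        /* 1. Get a Bayer-pattern image into the FPGA-SDRAM, e.g. via
         *    data splitting or a dedicated transfer call. */
        if (StoreImageToImageRAM(0 /* offset */) != 0)
            return -1;

        /* 2. Run the encoder with the FPGA-SDRAM as source; the
         *    4:1:1 subsampled stream is forwarded to the CPU. */
        int n = EncodeJpegFromImageRAM(0 /* offset */, jpeg_buf, buf_size);
        if (n < 0)
            return -1;

        /* 3. The application must still add the JFIF headers to turn
         *    the stream into a complete .jpg file. */
        return n; /* number of stream bytes received */
    }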
Performance:
The color JPEG encoder can compress about 27 megapixels of raw data per second. Compressing a 640×480 Bayer-pattern image therefore takes roughly 11 ms (640 × 480 = 307,200 pixels; 307,200 / 27,000,000 ≈ 11.4 ms). This is the pure coding time, not including the grabbing of the image or any other transfer times to the FPGA-SDRAM.
Other comments:
Data splitting can be used to transfer an acquired sensor image to the CPU and simultaneously store the same image in the FPGA-SDRAM.
Notes:
Please refer to the chapter Laser Profiling for details on the FPGA laser profiling algorithm.
Important note: As mentioned in the 3×3 filter section, due to limited FPGA resources on the R2 sensor the 3×3 filters cannot coexist with the laser scanning unit; the default FPGA-Design therefore contains the laser-scanning unit and omits the filters. Contact your distributor to get an FPGA-Design containing the 3×3 filters instead.
Cameras with a color sensor are provided with a different FPGA-Design than cameras with a greyscale sensor. These FPGA-Designs are not compatible with each other, so before you start programming you should know which type of camera your application is being developed for.
As mentioned in the Datapath concept section, it is the responsibility of the camlink.ko driver module to set the FPGA's datapath. The selected datapath ultimately depends on the sequence of API calls, so from the application programmer's point of view the API/driver combination can be seen as a kind of state machine. To give an overview of which calls are possible in a given situation, the following two graphics show the connection between the temporal sequence of API calls, "pseudo driver states", and possible data transfer calls. The starting point is always the RAW state.
As the images above show, not all image- and datapath-related API calls (e.g. ReadImageRAM(), SetFilter(), etc.) can be executed at any time. There may also be dependencies on the sequence of the calls. We therefore strongly recommend checking the functions' return values.
If a datapath switching error occurs, it may also help to inspect the error trace buffer by executing "cat /proc/trace" on the camera's console.
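A defensive call pattern might look like the following sketch; SetFilter() is named above, but the 0-on-success convention and the FILTER_SOBEL constant are assumptions.

    #include <stdio.h>

    /* Assumed prototype from the camera API header. */
    extern int SetFilter(int filter_type);
    #define FILTER_SOBEL 1 /* hypothetical constant for one 3x3 filter */

    int configure_datapath(void)
    {
        /* The call may be rejected depending on the current
         * "pseudo driver state", so always check the return value. */
        if (SetFilter(FILTER_SOBEL) != 0) {
            fprintf(stderr, "SetFilter failed - check \"cat /proc/trace\" "
                            "on the camera console\n");
            return -1;
        }
        return 0;
    }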