This document describes the layout of information within a frame of image data populated by a call to the Pixelink API function PxLGetNextFrame or delivered to a callback routine (see PxLSetCallback). The layout of a frame is determined primarily by the camera's pixel format, which can be found in the PixelFormat field of the Frame Descriptor structure that is returned by PxLGetNextFrame or passed to the callback routine. The pixel format will typically be one of the following:
Mono:
Color:
Polar (for polar cameras only):
MONO8
Each sensor pixel is represented as an 8-bit DN value and takes up 8 bits (1 byte). The frame is organized as an array of sensor pixels, starting at top left, moving left to right, top to bottom.
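Given the layout above, reading an individual MONO8 pixel is simple array indexing. The sketch below assumes a pointer to the frame data and the ROI width in pixels; the names 'frame' and 'width' are illustrative, not API names.

```c
#include <stdint.h>

/* Return the 8-bit DN value of the sensor pixel at (row, col) in a MONO8
   frame. 'frame' points to the data filled in by PxLGetNextFrame and
   'width' is the ROI width in pixels. */
static uint8_t Mono8Pixel(const uint8_t* frame, int width, int row, int col)
{
    return frame[row * width + col];
}
```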
Example
PL-B771 camera configured for:
MONO8
ROI is 8X8
The camera is looking at a bright white image and all pixels are saturated (i.e. at max value).
Looking at the frame data in memory on a byte-by-byte basis:
0x00A92D20 ff ff ff ff ff ff ff ff
0x00A92D28 ff ff ff ff ff ff ff ff
0x00A92D30 ff ff ff ff ff ff ff ff
0x00A92D38 ff ff ff ff ff ff ff ff
0x00A92D40 ff ff ff ff ff ff ff ff
0x00A92D48 ff ff ff ff ff ff ff ff
0x00A92D50 ff ff ff ff ff ff ff ff
0x00A92D58 ff ff ff ff ff ff ff ff
...
MONO16
Each sensor pixel is represented as a 10-bit or 12-bit digital number (DN). The number of bits (10 or 12) depends on the capabilities of the camera and can be determined by querying the feature FEATURE_MAX_PIXEL_SIZE (supported in API versions 6.18 and later). If your camera does not support this feature, only 10-bit values are used.
Whether 10- or 12-bit data, the data from each sensor pixel takes up 16 bits (2 bytes).
The frame is organized as an array of sensor pixels, starting at top left, moving left to right, top to bottom.
The camera always uses the uppermost bits of a 16 bit value. On a 10-bit camera, the pixel values range from 0x0000 to 0xFFC0 (the bottom 6 bits aren't used), whereas on a 12-bit camera the pixel values range from 0x0000 to 0xFFF0 (the bottom 4 bits aren't used).
The 16-bit data from the camera arrives in the computer in big-endian order. Because Wintel (Windows & Intel/Intel-compatible) computers are little-endian, a simple read of a 16-bit pixel value will return a byte-swapped result.
Example
PL-B771 camera configured for:
MONO16
ROI is 8X8
The camera supports 10-bit data
The camera is looking at a bright white image so that all pixels are saturated. For 10-bit data we would expect each pixel to have a value of 1023 (0x3FF).
Looking at the frame in memory on a byte-by-byte basis (hex values):
0x00A94FF8 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95008 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95018 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95028 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95038 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95048 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95058 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
0x00A95068 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0 ff c0
Looking at the data above in big-endian order, we see that the first pixel has a value of 0xFFC0. In other words, all 10 uppermost bits of the 16-bit value are set. However, we can't do big-endian reads on Wintel computers; a 16-bit (little-endian) read of the first pixel will return a value of 0xC0FF. To get the expected value of 0x03FF requires a bit of bit twiddling.
// Read the first 16-bit pixel; pFrame points to the start of the frame
// data (the address 0x00A94FF8 in the dump above)
U16 value = *((U16*)pFrame);
// Step 1 - Convert from big-endian to little-endian (swap the two bytes)
U16 correctValue = (U16)(((value & 0xFF00) >> 8) | ((value & 0x00FF) << 8));
// Step 2 - 'Normalize' to discard the unused low-order bits:
// for 10-bit data, the bottom 6 bits aren't used (16-10);
// for 12-bit data, the bottom 4 bits aren't used (16-12)
correctValue >>= 6;   // 10-bit camera: correctValue is now 0x03FF
MONO12_PACKED
Each sensor pixel is represented as a 12-bit DN value, so that 2 pixels are packed into 3 bytes. The frame is organized as an array of sensor pixels, starting at top left, moving left to right, top to bottom.
Every 2 pixels are 'packed' into 3 bytes as follows:
1. The first byte contains the 8 most significant bits of the first pixel.
2. The second byte contains the 4 least significant bits of the second pixel, followed by the 4 least significant bits of the first pixel.
3. The third byte contains the 8 most significant bits of the second pixel.
The astute reader will notice that this packing scheme puts the most significant bits of each pixel into a unique data byte, which makes for easy conversion to 8-bit images by ‘ignoring’ bytes that only contain the least significant pixel data.
Example: camera configured for:
MONO12_PACKED
ROI is 8x8
The camera is looking at a checkerboard pattern image: the pixels alternate between white (max value) and black. Let each square represent 1 pixel.
The packing of the first two pixels (white, then black): 1111 1111 | 0000 1111 | 0000 0000
The frame data in memory on a byte-by-byte basis:
0x00A92D20 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92D2C 00 f0 ff 00 f0 ff 00 f0 ff 00 f0 ff
0x00A92D38 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92D44 00 f0 ff 00 f0 ff 00 f0 ff 00 f0 ff
0x00A92D50 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92D5C 00 f0 ff 00 f0 ff 00 f0 ff 00 f0 ff
0x00A92D68 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92D74 00 f0 ff 00 f0 ff 00 f0 ff 00 f0 ff
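A minimal sketch of unpacking one 3-byte MONO12_PACKED group into two 12-bit values, following the byte layout described above (the function name is illustrative):

```c
#include <stdint.h>

/* Unpack one 3-byte MONO12_PACKED group into two 12-bit pixel values.
   Byte 0: 8 MSBs of pixel 1
   Byte 1: 4 LSBs of pixel 2 (upper nibble), 4 LSBs of pixel 1 (lower nibble)
   Byte 2: 8 MSBs of pixel 2 */
static void UnpackMono12Packed(const uint8_t* p, uint16_t* pix1, uint16_t* pix2)
{
    *pix1 = (uint16_t)((p[0] << 4) | (p[1] & 0x0F));
    *pix2 = (uint16_t)((p[2] << 4) | (p[1] >> 4));
}
```

Applied to the first group of the dump above (ff 0f 00), this yields 0xFFF for the first (white) pixel and 0x000 for the second (black) pixel.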
MONO12_PACKED_MSFIRST
Each sensor pixel is represented as a 12-bit DN value and images are packed such that it takes 3 bytes to represent 2 pixels. The frame is organized as an array of sensor pixels, starting at top left, moving left to right, top to bottom.
Every 2 pixels are 'packed' into 3 bytes as follows:
1. The first byte contains the 8 most significant bits of the first pixel.
2. The second byte contains the 8 most significant bits of the second pixel.
3. The third byte contains the 4 least significant bits of the second pixel, followed by the 4 least significant bits of the first pixel.
Similar to the MONO12_PACKED format, this packing scheme also facilitates easy conversion to 8-bit formats by ‘ignoring’ bytes that only contain the least significant pixel data.
Example: camera configured for:
MONO12_PACKED_MSFIRST
ROI is 8X8.
The camera is looking at a checkerboard pattern image: the pixels alternate between white (max value) and black. Let each square represent 1 pixel.
The packing of the first two pixels (white, then black): 1111 1111 | 0000 0000 | 0000 1111
The frame data in memory on a byte-by-byte basis:
0x00A92D20 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92D2C 00 ff f0 00 ff f0 00 ff f0 00 ff f0
0x00A92D38 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92D44 00 ff f0 00 ff f0 00 ff f0 00 ff f0
0x00A92D50 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92D5C 00 ff f0 00 ff f0 00 ff f0 00 ff f0
0x00A92D68 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92D74 00 ff f0 00 ff f0 00 ff f0 00 ff f0
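The corresponding unpacking sketch for MONO12_PACKED_MSFIRST, following the byte layout described above (the function name is illustrative):

```c
#include <stdint.h>

/* Unpack one 3-byte MONO12_PACKED_MSFIRST group into two 12-bit pixel values.
   Byte 0: 8 MSBs of pixel 1
   Byte 1: 8 MSBs of pixel 2
   Byte 2: 4 LSBs of pixel 2 (upper nibble), 4 LSBs of pixel 1 (lower nibble) */
static void UnpackMono12PackedMsFirst(const uint8_t* p,
                                      uint16_t* pix1, uint16_t* pix2)
{
    *pix1 = (uint16_t)((p[0] << 4) | (p[2] & 0x0F));
    *pix2 = (uint16_t)((p[1] << 4) | (p[2] >> 4));
}
```

Applied to the first group of the dump above (ff 00 0f), this yields 0xFFF for the first (white) pixel and 0x000 for the second (black) pixel.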
BAYER8
Each sensor pixel is represented as an 8-bit DN value that takes up 8 bits (1 byte). The frame is organized as an array of sensor pixels, starting at top left, moving left to right, top to bottom. Atop the sensor is a Bayer filter: a colour filter array (CFA) which limits which colours are seen by an individual sensor pixel. The Bayer pattern will be either Green-Red-Blue-Green (GRBG) or Green-Blue-Red-Green (GBRG), depending on the sensor. The variation of BAYER8 in use can be determined from the PixelFormat field of the Frame Descriptor. For example:
Pixel format 3 = PIXEL_FORMAT_BAYER8_GRBG
Pixel format 8 = PIXEL_FORMAT_BAYER8_GBRG
See Appendix A for a list of cameras and the Bayer pattern they use.
Example
PL-B742 camera configured for:
BAYER8
ROI of 8X8
The camera is looking at a bright pure red image. The pixel format for the camera is PIXEL_FORMAT_BAYER8_GBRG. Looking at the frame data in memory on a byte-by-byte basis:
0x00A92B98 00 00 00 00 00 00 00 00
0x00A92BA0 ff 00 ff 00 ff 00 ff 00
0x00A92BA8 00 00 00 00 00 00 00 00
0x00A92BB0 ff 00 ff 00 ff 00 ff 00
0x00A92BB8 00 00 00 00 00 00 00 00
0x00A92BC0 ff 00 ff 00 ff 00 ff 00
0x00A92BC8 00 04 00 00 00 00 00 00
0x00A92BD0 ff 00 ff 00 ff 00 ff 00
...
Using the same configuration, but now looking at a bright blue image we see this data:
0x00A92B98 00 ff 00 ff 00 ff 00 ff
0x00A92BA0 00 00 00 00 00 00 00 00
0x00A92BA8 00 ff 00 ff 00 ff 00 ff
0x00A92BB0 00 00 00 00 00 00 00 00
0x00A92BB8 00 ff 00 ff 00 ff 00 ff
0x00A92BC0 00 00 00 00 00 00 00 00
0x00A92BC8 00 ff 00 ff 00 ff 00 ff
0x00A92BD0 00 00 00 00 00 00 00 00
Using the same configuration, but now looking at a bright green image:
0x00A92B98 ff 00 ff 00 ff 00 ff 00
0x00A92BA0 00 ff 00 ff 00 ff 00 ff
0x00A92BA8 ff 00 ff 00 ff 00 ff 00
0x00A92BB0 00 ff 00 ff 00 ff 00 ff
0x00A92BB8 ff 00 ff 00 ff 00 ff 00
0x00A92BC0 00 ff 00 ff 00 ff 00 ff
0x00A92BC8 ff 00 ff 00 ff 00 ff 00
0x00A92BD0 00 ff 00 ff 00 ff 00 ff
...
From this it can be seen that the CFA over this sensor is arranged as:
0x00A92B98 G1 BB G1 BB G1 BB G1 BB
0x00A92BA0 RR G2 RR G2 RR G2 RR G2
0x00A92BA8 G1 BB G1 BB G1 BB G1 BB
0x00A92BB0 RR G2 RR G2 RR G2 RR G2
0x00A92BB8 G1 BB G1 BB G1 BB G1 BB
0x00A92BC0 RR G2 RR G2 RR G2 RR G2
0x00A92BC8 G1 BB G1 BB G1 BB G1 BB
0x00A92BD0 RR G2 RR G2 RR G2 RR G2
...
With PIXEL_FORMAT_BAYER*_GBRG, the GBRG refers to the upper-left quartet of sensor pixels, and this quartet (Bayer pattern) is repeated over the entire sensor.
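The GBRG arrangement above can be captured in a small helper that reports which color channel a given sensor pixel carries. A sketch for the GBRG case only; the function name is illustrative:

```c
/* For PIXEL_FORMAT_BAYER8_GBRG, identify the colour channel of the sensor
   pixel at (row, col); row and col are 0-based within the ROI. */
static char Gbrg8Channel(int row, int col)
{
    if (row % 2 == 0)
        return (col % 2 == 0) ? 'G' : 'B';  /* even rows: G B G B ... */
    else
        return (col % 2 == 0) ? 'R' : 'G';  /* odd rows:  R G R G ... */
}
```

With this helper, frame[row * width + col] is the 8-bit DN value of that channel at that sensor location.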
BAYER16
Each sensor pixel is represented as a 10-bit or 12-bit DN value that takes up 16 bits (2 bytes). For the organization of an individual pixel's bits within the 16 bits, see MONO16. As with BAYER8, there's a CFA above the sensor. The Bayer pattern will be either GRBG or GBRG, depending on the sensor.
Pixel format 4 = PIXEL_FORMAT_BAYER16_GRBG
Pixel format 11 = PIXEL_FORMAT_BAYER16_GBRG
See Appendix A for a list of cameras and the Bayer pattern they use.
Example
PL-B742 camera configured for:
BAYER16
ROI of 8x8
Camera is looking at a bright pure red image. The pixel format for the camera is PIXEL_FORMAT_BAYER16_GBRG
Looking at the frame data in memory on a byte-by-byte basis:
0x00A92B98 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BA8 ff c0 00 00 ff c0 00 00 ff c0 00 00 ff c0 00 00
0x00A92BB8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BC8 ff c0 00 00 ff c0 00 00 ff c0 00 00 ff c0 00 00
0x00A92BD8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BE8 ff c0 00 00 ff c0 00 00 ff c0 00 00 ff c0 00 00
0x00A92BF8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92C08 ff c0 00 00 ff c0 00 00 ff c0 00 00 ff c0 00 00
...
BAYER12_PACKED
Each sensor pixel is represented as a 12-bit DN value that takes up 12 bits such that it takes 3 bytes to represent 2 pixels. For the organization of an individual pixel's bits within the 12-bits, see MONO12_PACKED. As with BAYER8, there's a CFA above the sensor. The Bayer pattern will be either GRBG, RGGB, or GBRG depending on the sensor.
Pixel format 14 = PIXEL_FORMAT_BAYER12_PACKED_GRBG
Pixel format 15 = PIXEL_FORMAT_BAYER12_PACKED_RGGB
Pixel format 16 = PIXEL_FORMAT_BAYER12_PACKED_GBRG
See Appendix A for a list of cameras and the Bayer pattern they use.
Example
Camera Configured for:
BAYER12_PACKED
ROI of 8x8
Camera is looking at a bright pure red image. The pixel format for the camera is PIXEL_FORMAT_BAYER12_PACKED_GBRG
Looking at the frame data in memory on a byte-by-byte basis:
0x00A92B98 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BA4 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92BB0 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BBC ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92BC8 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BD4 ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
0x00A92BE0 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BEC ff 0f 00 ff 0f 00 ff 0f 00 ff 0f 00
BAYER12_PACKED_MSFIRST
Each sensor pixel is represented as a 12-bit DN value and images are packed such that it takes 3 bytes to represent 2 pixels. For the organization of an individual pixel's bits within the 12-bits, see MONO12_PACKED_MSFIRST. As with BAYER8, there's a CFA above the sensor. The Bayer pattern will be either GRBG, RGGB, GBRG, or BGGR depending on the sensor.
Pixel format 21 = PIXEL_FORMAT_BAYER12_GRBG_PACKED_MSFIRST
Pixel format 22 = PIXEL_FORMAT_BAYER12_RGGB_PACKED_MSFIRST
Pixel format 23 = PIXEL_FORMAT_BAYER12_GBRG_PACKED_MSFIRST
Pixel format 24 = PIXEL_FORMAT_BAYER12_BGGR_PACKED_MSFIRST
See Appendix A for a list of cameras and the Bayer pattern they use.
Example
camera configured for:
BAYER12_PACKED_MSFIRST
ROI of 8x8
Camera is looking at a bright pure red image. The pixel format for the camera is PIXEL_FORMAT_BAYER12_GBRG_PACKED_MSFIRST.
Looking at the frame data in memory on a byte-by-byte basis:
0x00A92B98 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BA4 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92BB0 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BBC ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92BC8 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BD4 ff 00 0f ff 00 0f ff 00 0f ff 00 0f
0x00A92BE0 00 00 00 00 00 00 00 00 00 00 00 00
0x00A92BEC ff 00 0f ff 00 0f ff 00 0f ff 00 0f
YUV422
The camera converts each sensor pixel to a YUV (aka YCbCr) triplet. But, rather than transmitting the entire triplet for each individual sensor pixel, the following pattern is sent:
U Y | V Y
Each value is 8 bits, so a pair of pixels occupies 4 bytes. For the first pixel, only the U and Y values are sent; for the second pixel, only the V and Y values are sent.
Example
PL-B686CF camera configured for:
YUV422
ROI of 16x16
Camera is looking at a bright pure red image. According to the image statistics - found in the histogram tool of Pixelink Capture OEM:
Average Y is 76 (0x4C)
Average U is 85 (0x55)
Average V is 255 (0xFF)
The Frame Descriptor Pixel Format is 2, i.e. PIXEL_FORMAT_YUV422. Looking at the frame data in memory on a byte-by-byte basis:
0x00A95178 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A951A8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A951D8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95208 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95238 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95268 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95298 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A952C8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A952F8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95328 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95358 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95388 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A953B8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A953E8 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95418 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
0x00A95448 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
...
The UYVY pattern can be seen below:
0x00A95178 55 4c ff 4c 55 4c ff 4c 55 4c ff 4c
U Y V Y U Y V Y U Y V Y
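To recover RGB from this data, both pixels of a U Y V Y pair are typically reconstructed using the pair's shared U and V values. The sketch below uses the common BT.601 coefficients; the document does not specify which YUV-to-RGB matrix the camera's data assumes, so treat the constants as an assumption.

```c
#include <stdint.h>

static uint8_t ClampToByte(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one Y value plus the pair's shared U and V to RGB, using the
   common BT.601 coefficients (an assumption, see above). */
static void YuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                     uint8_t* r, uint8_t* g, uint8_t* b)
{
    int d = (int)u - 128;   /* U, centred about zero */
    int e = (int)v - 128;   /* V, centred about zero */
    *r = ClampToByte((int)y + (int)(1.402 * e));
    *g = ClampToByte((int)y - (int)(0.344 * d + 0.714 * e));
    *b = ClampToByte((int)y + (int)(1.772 * d));
}
```

With the averages from the red example (Y = 0x4C, U = 0x55, V = 0xFF), this returns approximately (254, 1, 0): nearly pure red, as expected.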
RGB24 (NON-DIB)
Each sensor pixel is represented by three 8-bit DN values, for a total of 3 bytes or 24 bits, with each 8-bit value representing a specific color channel: red, green, or blue. For RGB24, the order of the channels is Red-Green-Blue. The frame is organized as an array of sensor pixels starting from top left, moving left to right, top to bottom (last row last). This is the so-called non-DIB layout, which differs from the Windows Device Independent Bitmap (DIB) format. So the first byte of the first pixel (top left on the sensor) is a red channel value, and the last byte of the last pixel is a blue channel value.
NOTE: When saving camera images on the computer using an RGB derived image format (such as BMP files), you need to take care to use the correct vertical orientation (DIB vs NON-DIB). See the function PxLFormatImage for details.
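For RGB24 (non-DIB), the byte offset of any pixel follows directly from the layout above. A sketch, with 'width' standing in for the ROI width in pixels (an illustrative name):

```c
#include <stddef.h>

/* Byte offset of the red channel of the pixel at (row, col) in an RGB24
   (non-DIB) frame; green and blue follow at offsets +1 and +2. */
static size_t Rgb24Offset(int width, int row, int col)
{
    return (size_t)(row * width + col) * 3;
}
```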
BGR24
BGR24 is identical to RGB24 (non-DIB) except for the channel order. Each sensor pixel is represented by three 8-bit DN values, for a total of 3 bytes or 24 bits, and for BGR24 the order of the channels is Blue-Green-Red. The frame is organized as an array of sensor pixels starting from top left, moving left to right, top to bottom (last row last). So the first byte of the first pixel (top left on the sensor) is a blue channel value, and the last byte of the last pixel is a red channel value.
RGBA and BGRA
Each sensor pixel is represented by four 8-bit DN values, for a total of 4 bytes or 32 bits: three color channels plus an alpha channel. The frame is organized as an array of sensor pixels starting from top left, moving left to right, top to bottom (last row last). For RGBA, the channel order is Red-Green-Blue-Alpha; for BGRA, the channel order is Blue-Green-Red-Alpha.
The ‘alpha’ component of a pixel refers to the ‘translucency’ of the pixel as it is displayed. A pixel that has an alpha value of 0 will be completely transparent. Conversely, a pixel with a maximum alpha value (255) would be completely opaque. The only image formats that support the alpha formats are raw data and PNG. If you save a frame as a format other than these options, it will be converted to RGB24 pixel format.
Please note that the alpha formats are only available on PL-X cameras.
ARGB and ABGR
Each sensor pixel is represented by four 8-bit DN values, for a total of 4 bytes or 32 bits: an alpha channel followed by three color channels. The frame is organized as an array of sensor pixels starting from top left, moving left to right, top to bottom (last row last). For ARGB, the channel order is Alpha-Red-Green-Blue; for ABGR, the channel order is Alpha-Blue-Green-Red.
The 'alpha’ component of a pixel refers to the ‘translucency’ of the pixel as it is displayed. A pixel that has an alpha value of 0 will be completely transparent. Conversely, a pixel with a maximum alpha value (255) would be completely opaque. The only image formats that support the alpha formats are raw data and PNG. If you save a frame as a format other than these options, it will be converted to RGB24 pixel format.
Please note that the alpha formats are only available on PL-X cameras.
PIXEL_FORMAT_STOKES4_12
Used only with Polar Cameras. Each pixel occupies 48 bits, 12 bits for each of 4 Stokes channels. Specifically, the Stokes S0 channel (represented twice per pixel) is the average of the 0-degree and 90-degree sub-pixels ((I0 + I90) / 2). S1 is the difference between the 0-degree and 90-degree sub-pixels (I0 - I90), and S2 is the difference between the 45-degree and 135-degree sub-pixels (I45 - I135). S0 is an unsigned 12-bit value, while S1 and S2 are signed 12-bit values: a sign bit plus 11 data bits, using two's-complement notation. The data is 'packed' such that every two 12-bit channels occupy 3 bytes. Furthermore, a pixel is represented as a 2x2 grid of the 4 Stokes channels, as shown in the following diagram.
Example
camera configured for:
STOKES4_12
ROI of 4x4
This diagram shows 16 pixels, orientated as a 4x4 grid, and would require 96 bytes to represent the image (8 rows of 12 bytes). Furthermore, the byte sequence of the first row will be:
Most significant 8 bits of S1 – Pixel 1
Most significant 8 bits of S0 – Pixel 1
Least significant 4 bits of S1 in the least significant 4 bits, and the least significant 4 bits of S0 in the most significant 4 bits – Pixel 1
Most significant 8 bits of S1 – Pixel 2
…
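When unpacking S1 or S2, the signed 12-bit value must be sign-extended before it can be used as a normal integer. A sketch, assuming the 12 bits have already been assembled into the low bits of a 16-bit word (the function name is illustrative):

```c
#include <stdint.h>

/* Sign-extend a 12-bit two's-complement value (as used by the S1 and S2
   Stokes channels) into a 16-bit signed integer. */
static int16_t SignExtend12(uint16_t raw12)
{
    raw12 &= 0x0FFF;                       /* keep only the 12 data bits */
    if (raw12 & 0x0800)                    /* sign bit set? */
        return (int16_t)(raw12 | 0xF000);  /* extend sign into top 4 bits */
    return (int16_t)raw12;
}
```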
PIXEL_FORMAT_POLAR4_12
Used only with Polar Cameras. Each pixel occupies 48 bits: a 12-bit value representing the normalized average of the polar channels (I0, I45, I90, and I135), repeated 4 times per pixel. See FEATURE_POLAR_WEIGHTINGS for details on how that average is computed. The data is 'packed' such that every two 12-bit values occupy 3 bytes. Furthermore, a pixel is represented as a 2x2 grid of the repeated 12-bit value, as shown in the following diagram.
Example
camera configured for:
POLAR4_12
ROI of 4x4
This diagram shows 16 pixels, orientated as a 4x4 grid, and would require 96 bytes to represent the image (8 rows of 12 bytes). Furthermore, the byte sequence of the first row will be:
Most significant 8 bits of P1 – Pixel 1
Duplicate copy of most significant 8 bits of P1 – Pixel 1
Least significant 4 bits of P1 in the least and most significant 4 bits – Pixel 1
Most significant 8 bits of P2 – Pixel 2
…
For this pixel format, odd rows are a duplicate of the preceding row.
PIXEL_FORMAT_POLAR_RAW4_12
Used only with Polar Cameras. Each pixel occupies 48 bits, 12 bits representing each of the 4 polar channels (I0, I45, I90, and I135). This amounts to the 'raw' pixel values as output from the polar imaging sensor. Note, however, that the polar channel values may be augmented via FEATURE_POLAR_WEIGHTINGS if so desired. The data is 'packed' such that every two 12-bit values occupy 3 bytes. Furthermore, a pixel is represented as a 2x2 grid of the polar channels, as shown in the following diagram.
Example
Camera configured for:
POLAR_RAW4_12
ROI of 4x4
This diagram shows 16 pixels, orientated as a 4x4 grid, and would require 96 bytes to represent the image (8 rows of 12 bytes). Furthermore, the byte sequence of the first row will be:
Most significant 8 bits of I90 – Pixel 1
Most significant 8 bits of I45 – Pixel 1
Least significant 4 bits of I90 in the least significant 4 bits, and the least significant 4 bits of I45 in the most significant 4 bits – Pixel 1
Most significant 8 bits of I90 – Pixel 2
...
PIXEL_FORMAT_HSV4_12
Used only with Polar Cameras. Each pixel occupies 48 bits: four 12-bit values representing the intensity (or polar value), polar degree, and polar angle (intensity is represented twice per pixel), computed as follows:
Intensity (V) = (I0 + I90) / 2
Polar Degree (S/D) = sqrt((I0-I90)**2 + (I45-I135)**2) / ((I0 + I90) / 2)
Polar Angle (H/A) = 0.5 * arctan((I45-I135) / (I0-I90))
V and S/D are unsigned 12-bit values between 0 and 4095, and H/A is a 12-bit value between 0 and 180. The data is 'packed' such that every two 12-bit values occupy 3 bytes. Furthermore, a pixel is represented as a 2x2 grid of the four 12-bit values, as shown in the following diagram.
Example
camera configured for:
HSV4_12
ROI of 4x4
This diagram shows 16 pixels, orientated as a 4x4 grid, and would require 96 bytes to represent the image (8 rows of 12 bytes). Furthermore, the byte sequence of the first row will be:
Most significant 8 bits of H/A – Pixel 1
Most significant 8 bits of V – Pixel 1
Least significant 4 bits of H/A in the least significant 4 bits, and the least significant 4 bits of V in the most significant 4 bits – Pixel 1
Most significant 8 bits of H/A – Pixel 2
...
Note that the API allows for multiple interpretations of this pixel format, allowing the user to ‘highlight’ certain attributes of the polarization. See FEATURE_POLAR_HSV_INTERPRETATION for more details.
Appendix A - Bayer Patterns Used by Pixelink Cameras
The Bayer pattern depends only upon the internal sensor used. The descriptor accompanying each frame will identify the pixel format with the correct CFA.