Type: Structure
Description
The following structure defines the frame descriptor, which contains the settings for the captured frame.
/* Frame Descriptor */
typedef struct _FRAME_DESC {
    U32   uSize;
    float fFrameTime;
    U32   uFrameNumber;
    struct _Brightness { float fValue; } Brightness;
    struct { float fValue; } AutoExposure;
    struct { float fValue; } Sharpness;
    struct { float fValue; } WhiteBalance;
    struct { float fValue; } Hue;
    struct { float fValue; } Saturation;
    struct { float fValue; } Gamma;
    struct { float fValue; } Shutter;
    struct { float fValue; } Gain;
    struct { float fValue; } Iris;
    struct { float fValue; } Focus;
    struct { float fValue; } Temperature;
    struct {
        float fMode;
        float fType;
        float fPolarity;
        float fDelay;
        float fParameter;
    } Trigger;
    struct { float fValue; } Zoom;
    struct { float fValue; } Pan;
    struct { float fValue; } Tilt;
    struct { float fValue; } OpticalFilter;
    struct {
        float fMode[PXL_MAX_STROBES];
        float fPolarity[PXL_MAX_STROBES];
        float fParameter1[PXL_MAX_STROBES];
        float fParameter2[PXL_MAX_STROBES];
        float fParameter3[PXL_MAX_STROBES];
    } GPIO;
    struct { float fValue; } FrameRate;
    struct { float fLeft; float fTop; float fWidth; float fHeight; } Roi;
    struct { float fHorizontal; float fVertical; } Flip;
    struct { float fValue; } Decimation;
    struct { float fValue; } PixelFormat;
    struct { float fKneePoint[PXL_MAX_KNEE_POINTS]; } ExtendedShutter;
    struct { float fLeft; float fTop; float fWidth; float fHeight; } AutoROI;
    struct { float fValue; } DecimationMode;
    struct { float fRedGain; float fGreenGain; float fBlueGain; } WhiteShading;
    struct { float fValue; } Rotate;
    struct { float fValue; } ImagerClkDivisor;            /* Added to slow down the imager to support slower frame rates */
    struct { float fValue; } TriggerWithControlledLight;
    struct { float fValue; } MaxPixelSize;                /* The number of bits used to represent 16-bit data (10 or 12) */
    struct { float fValue; } TriggerNumber;               /* Valid only for hardware trigger mode 14. Identifies the frame number within a particular trigger sequence */
    struct { U32 uMask; } ImageProcessing;                /* Bit mask describing processing that was performed on the image */
    struct { float fHorizontal; float fVertical; } PixelAddressingValue; /* Valid only for cameras with independent X & Y pixel addressing */
    double dFrameTime;                                    /* Same as fFrameTime, but with better resolution/capacity */
    U64    u64FrameNumber;                                /* Same as uFrameNumber, but with greater capacity */
    struct { float fValue; } BandwidthLimit;              /* Upper bound on the amount of bandwidth the camera can use (in Mb/s) */
    struct { float fValue; } ActualFrameRate;             /* The frame rate (in frames/sec) being used by the camera */
    struct {
        float fLeft;
        float fTop;
        float fWidth;
        float fHeight;
        float fMaxValue;
    } SharpnessScoreParams;                               /* Controls calculation of SharpnessScore. Valid only for cameras that support FEATURE_SHARPNESS_SCORE */
    struct { float fValue; } SharpnessScore;              /* SharpnessScore of this image. Valid only for cameras that support FEATURE_SHARPNESS_SCORE */
    struct {
        U32   uMode;                                      /* The type of HDR applied to the image (if any) */
        float fDarkGain;                                  /* Gain used for dark pixel components */
        float fBrightGain;                                /* Gain used for bright pixel components */
    } HDRInfo;
    struct {
        U32   uCFA;                                       /* Type of color filter array used (if color camera) */
        float f0Weight;                                   /* Contribution of 0 degree polarized light (horizontal), in percent */
        float f45Weight;                                  /* Contribution of 45 degree polarized light (CW from horizontal), in percent */
        float f90Weight;                                  /* Contribution of 90 degree polarized light (vertical), in percent */
        float f135Weight;                                 /* Contribution of 135 degree polarized light (CW from horizontal), in percent */
        U32   uHSVInterpretation;                         /* How PIXEL_FORMAT_HSV4_12 should be interpreted */
    } PolarInfo;                                          /* Polar sub-channel weighting factors (if supported) */
    struct {
        float fCompressionStrategy;                       /* Type of compression used (0 == no compression) */
        float fCompressionRate;                           /* Total number of bytes in the compressed image (or image size if uncompressed) */
    } CompressionInfo;                                    /* Compression information for this specific PixelFormat.fValue */
} FRAME_DESC, *PFRAME_DESC;
FRAME_DESC Fields and Corresponding Camera Features
Most fields are a simple representation of the corresponding camera feature and can be accessed via the fX element (e.g. fValue) of the appropriate struct (with casting where necessary).
e.g.

float exposureTime = frameDesc.Shutter.fValue;
U32 roiLeft = (U32)frameDesc.Roi.fLeft;
U32 pixelAddressingMode = (U32)frameDesc.DecimationMode.fValue;
U32 pixelAddressingValue = (U32)frameDesc.Decimation.fValue;
int isHorizontallyFlipped = (int)frameDesc.Flip.fHorizontal;
// or
bool isHorizontallyFlipped = (0.0f == frameDesc.Flip.fHorizontal) ? false : true;
Some fields don't have a corresponding feature, or it may not be obvious what the corresponding feature is:
FRAME_DESC field | Corresponding Camera Feature |
---|---|
uSize | None. See below. |
fFrameTime | None. See below. |
uFrameNumber | None. See below. |
AutoExposure | None. Ignore. (This field is a vestige of the 3.2 API) |
Decimation | FEATURE_DECIMATION / FEATURE_PIXEL_ADDRESSING |
DecimationMode | FEATURE_DECIMATION / FEATURE_PIXEL_ADDRESSING |
TriggerNumber | Only supported on certain models of cameras (PL-C720 series). |
ImageProcessing | Only supported on certain models of cameras (PL-C720 series). |
CompressionInfo | FEATURE_COMPRESSION. fCompressionStrategy reflects the compression scheme in use for the current pixel format (if any); fCompressionRate provides a per-image measure of the compression achieved. |
uSize Field
This field indicates to the API which version of the FRAME_DESC your code was compiled against. Before a FRAME_DESC is passed to PxLGetNextFrame, this field must be populated with the size of the FRAME_DESC using sizeof(FRAME_DESC) or an equivalent.
e.g.

FRAME_DESC frameDesc;
frameDesc.uSize = sizeof(frameDesc);
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);
The API will use the value in uSize to:
Determine which fields of the FRAME_DESC should be populated. That is, if the FRAME_DESC passed in is an older (and therefore smaller) version, the API will not write past the end of the FRAME_DESC supplied.
Report back to the caller the number of bytes copied to the FRAME_DESC. This tells the application whether it is dealing with an older version of the API that does not support all the fields of the FRAME_DESC (see the sketch below).
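For example, a minimal sketch of this version check, assuming (per the behavior described above) that the API reports the copied byte count back through the uSize field; API_SUCCESS is assumed here as the usual success-test macro:

FRAME_DESC frameDesc;
frameDesc.uSize = sizeof(frameDesc);
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);
if (API_SUCCESS(rc) && frameDesc.uSize < sizeof(frameDesc)) {
    // The installed API is older than the headers this code was compiled
    // with; fields beyond the first frameDesc.uSize bytes were not populated.
}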
Note: The uSize field must be initialized before each and every call that passes a FRAME_DESC pointer to PxLGetNextFrame.
e.g.

FRAME_DESC frameDesc;
frameDesc.uSize = sizeof(frameDesc);
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);

// More code here

// Reusing the frameDesc variable, so reinitialize the uSize field.
frameDesc.uSize = sizeof(frameDesc);
rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);
fFrameTime and uFrameNumber - PL-A and PL-B Camera Models
fFrameTime is a monotonically increasing value reflecting the time (in seconds) since streaming was started. For FireWire and GigE cameras, this field is reset to zero each and every time streaming is started. For USB cameras, this field is set to 0 once -- the first time streaming is started -- and is not reset until the camera is rebooted. Therefore, in the generic case, if you want to know the time since streaming was started, you must record the fFrameTime of the first FRAME_DESC received after streaming has begun and use it as the start time.
uFrameNumber has similar behavior, except that GigE cameras do NOT reset the count with each stream start. Note that uFrameNumber is a count of the frames that have arrived at the host, not a count of frames from the sensor. Buffer overflows between the camera and the host will not cause this number to skip; it will skip, however, if frames are not read from the API in a timely manner.
e.g.

PxLSetStreamState(hCamera, START_STREAM);

FRAME_DESC frameDesc;
frameDesc.uSize = sizeof(frameDesc);
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);
const float fTimeOfFirstFrame = frameDesc.fFrameTime;
const U32 uFirstFrameNumber = frameDesc.uFrameNumber;

// ...
// Later in the code we grab another frame.
frameDesc.uSize = sizeof(frameDesc);
rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);
float fTimeSinceFirstFrame = frameDesc.fFrameTime - fTimeOfFirstFrame;
U32 numFramesSinceFirstFrame = frameDesc.uFrameNumber - uFirstFrameNumber;
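Building on the above, a small sketch for detecting frames that were skipped because the application did not read them in time (keepGrabbing is a hypothetical, application-defined loop control):

U32 uPreviousFrameNumber = frameDesc.uFrameNumber;
while (keepGrabbing) {
    frameDesc.uSize = sizeof(frameDesc);
    if (API_SUCCESS(PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc))) {
        if (frameDesc.uFrameNumber != uPreviousFrameNumber + 1) {
            // One or more frames were not read in time and were skipped.
        }
        uPreviousFrameNumber = frameDesc.uFrameNumber;
    }
}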
fFrameTime and uFrameNumber - PL-C Camera Models
Older cameras (PL-A/B/E/H) acquire the fFrameTime and uFrameNumber relative to the host bus (1394/GigE) or bus driver (USB). That is, these numbers are associated with the image as it is received off of the host bus. PL-C and newer cameras have additional on-camera hardware that allows these numbers to be associated with images as they are captured from the imaging sensor, providing more accuracy than previously possible. Thus, the fFrameTime and uFrameNumber fields of images sourced from a PL-C (or newer) camera are derived from these hardware values. The units remain the same as on older cameras (seconds for fFrameTime, and an ordinal value for uFrameNumber), but the reset behavior and roll-over values are dependent on the specific camera platform.
dFrameTime and u64FrameNumber - PL-D Camera Models
dFrameTime and u64FrameNumber are higher-resolution variants of fFrameTime and uFrameNumber respectively; they capture the same information. The lower-resolution variants are maintained for backwards compatibility only; new applications should use the higher-resolution variants. Similarly, the newer PixelAddressingValue.fHorizontal and PixelAddressingValue.fVertical fields contain information similar to the older Decimation field, insofar as they report the amount of image reduction occurring due to pixel addressing, but the newer scheme can accommodate asymmetric pixel addressing. That is, a different pixel addressing factor can be applied to each of the horizontal and vertical dimensions. Note that not all cameras support asymmetric pixel addressing; for those cameras, the fHorizontal and fVertical values will always be the same. Again, the older Decimation field is provided for backwards compatibility; newer applications should use the fHorizontal and fVertical fields.
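For instance, a short sketch reading the newer variants (field names as defined in the structure above; handle and buffers assumed to be set up as in the earlier examples):

FRAME_DESC frameDesc;
frameDesc.uSize = sizeof(frameDesc);
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, bufferSize, pFrame, &frameDesc);

// Prefer the higher resolution/capacity variants on cameras that support them.
double frameTime = frameDesc.dFrameTime;      /* seconds */
U64 frameNumber  = frameDesc.u64FrameNumber;

// Asymmetric pixel addressing: the two factors may differ on supporting cameras.
U32 paHorizontal = (U32)frameDesc.PixelAddressingValue.fHorizontal;
U32 paVertical   = (U32)frameDesc.PixelAddressingValue.fVertical;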
typedef struct _CAMERA_INFO {
    S8 VendorName[33];
    S8 ModelName[33];
    S8 Description[256];
    S8 SerialNumber[33];
    S8 FirmwareVersion[12];
    S8 FPGAVersion[12];
    S8 CameraName[256];
    S8 XMLVersion[12];   // New as of Release 9
} CAMERA_INFO, *PCAMERA_INFO;
Release 9 adds the XMLVersion field to the CAMERA_INFO structure. This field, and any new fields appended after it, are only filled in by the new (as of Release 9) function PxLGetCameraInfoEx. Furthermore, this field is only filled in if the particular camera has an XML file (currently, only USB3 Vision and GigE Vision cameras have one). When populated, it contains a string describing the version of the camera's XML file, in the format "%02X.%02X.%02X".
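For example, a sketch of reading the XML version (the exact parameter list of PxLGetCameraInfoEx is an assumption here, modeled on the size-versioning pattern used by uSize above):

CAMERA_INFO cameraInfo;
PXL_RETURN_CODE rc = PxLGetCameraInfoEx(hCamera, &cameraInfo, sizeof(cameraInfo));
if (API_SUCCESS(rc)) {
    // XMLVersion is only filled in for cameras that have an XML file.
    printf("XML version: %s\n", cameraInfo.XMLVersion);
}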
dFrameTime and u64FrameNumber - PL-X Camera Models
PL-X cameras use the same dFrameTime and u64FrameNumber definitions as PL-D cameras. However, PL-X cameras have a couple of unique features that can impact the value of the dFrameTime. Specifically:
FEATURE_PTP allows two or more cameras on the same network to synchronize their clocks to a 'Master' clock using the IEEE 1588 / PTPv2 protocol. Once synchronized, the dFrameTime values reported by all cameras are from the same time base (the Master clock).
PL-X cameras support a new FEATURE_TRIGGER type: TRIGGER_TYPE_ACTION (3). All cameras configured with this trigger type will capture an image when instructed to do so via the function PxLSetActions. This technique can be used to capture images from multiple cameras at the same instant in time.
Especially as it relates to simultaneous capture of images from multiple cameras using TRIGGER_TYPE_ACTION, it's important to recognize that the camera(s) latch the dFrameTime when the imager has the first pixel available, which is after the camera's exposure time has completed. So, for example, say we had two identical camera models, each configured identically except for exposure: camera A is using an exposure of 20 ms, while camera B is using an exposure of 120 ms. In this example, you can expect the dFrameTime of the images returned from camera A and camera B to differ by 100 ms.
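As a sketch of that arithmetic (the two cameras are assumed to be already configured for TRIGGER_TYPE_ACTION and triggered together; the handle and buffer names are hypothetical):

FRAME_DESC descA, descB;
descA.uSize = sizeof(descA);
descB.uSize = sizeof(descB);
PxLGetNextFrame(hCameraA, bufferSize, pFrameA, &descA);  /* 20 ms exposure  */
PxLGetNextFrame(hCameraB, bufferSize, pFrameB, &descB);  /* 120 ms exposure */

// dFrameTime is latched after exposure completes, so the two timestamps
// differ by roughly the exposure difference: 120 ms - 20 ms = 100 ms.
double timestampSkew = descB.dFrameTime - descA.dFrameTime;  /* ~0.100 s */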
Streaming
Snapshots of FRAME_DESC fields for some camera features are taken when streaming is started, and are used to populate all FRAME_DESCs passed to PxLGetNextFrame thereafter. This means that any changes to these features while the camera is streaming will not be reflected in a FRAME_DESC until streaming is stopped and started again. These fields are:
FEATURE_SENSOR_TEMPERATURE
FEATURE_GPIO
FEATURE_FLIP
FEATURE_ROTATE
For example, when PxLSetStreamState is called with START_STREAM, FEATURE_SENSOR_TEMPERATURE is recorded by the API, and that value is used to populate all FRAME_DESCs until streaming is stopped and started again. (As with any feature, the actual current value of FEATURE_SENSOR_TEMPERATURE can be read with PxLGetFeature.)
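A sketch of the difference (PxLGetFeature's parameter list here follows the standard 4.0 API pattern; treat it as an assumption if your header differs):

// Value snapshotted at stream start, reported in every FRAME_DESC:
float snapshotTemp = frameDesc.Temperature.fValue;

// Actual current value, read directly from the camera:
U32 uFlags = 0;
U32 uNumParams = 1;
float fCurrentTemp = 0.0f;
PXL_RETURN_CODE rc = PxLGetFeature(hCamera, FEATURE_SENSOR_TEMPERATURE,
                                   &uFlags, &uNumParams, &fCurrentTemp);
// While streaming, snapshotTemp and fCurrentTemp may differ.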
Reminder: Some camera features cannot be changed while streaming, so they are effectively fixed when streaming is started. These features are:
FEATURE_LOOKUP_TABLE
FEATURE_FRAME_RATE
FEATURE_PIXEL_FORMAT
FEATURE_PIXEL_ADDRESSING
FEATURE_ROI
FEATURE_IMAGER_CLK_DIVISOR
FEATURE_TRIGGER
FEATURE_TRIGGER_WITH_CONTROLLED_LIGHT
It is possible to programmatically determine if a feature can be changed while streaming by reading the feature's feature flags with PxLGetCameraFeatures and examining the state of the FEATURE_FLAG_SETTABLE_WHILE_STREAMING bit.
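A sketch of that check (the two-call size-query pattern and the CAMERA_FEATURES member names follow common PxLGetCameraFeatures usage; treat them as assumptions if your header differs):

U32 uBufferSize = 0;
// First call with a NULL buffer to learn the required size.
PxLGetCameraFeatures(hCamera, FEATURE_ROI, NULL, &uBufferSize);
CAMERA_FEATURES* pFeatures = (CAMERA_FEATURES*)malloc(uBufferSize);
PXL_RETURN_CODE rc = PxLGetCameraFeatures(hCamera, FEATURE_ROI, pFeatures, &uBufferSize);
if (API_SUCCESS(rc)) {
    // Non-zero if FEATURE_ROI can be changed while the camera is streaming.
    U32 uSettable = pFeatures->pFeatures[0].uFlags & FEATURE_FLAG_SETTABLE_WHILE_STREAMING;
}
free(pFeatures);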
Usage
PxLFormatImage, PxLGetNextFrame