Summary of API Functions
The table below lists the API functions alphabetically; for the API functions grouped by functional area, see Summary of API Functions. Designers should consult the referenced topics for detailed descriptions of each function.
| API Function Name | Description |
| --- | --- |
|  | Initialize a specific controller and assign it to a specific camera. |
|  | Create a descriptor for the specified camera. |
|  | Convert a Pixelink data stream (.pds) file to a video file (.avi). |
|  | Convert either an uncompressed Pixelink data stream (.pds) file or an H.264-compressed data stream into a video file (.avi). |
|  | Convert a raw frame residing in an image buffer to an image file (.bmp, .tif, .psd, .jpg). |
|  | Check if there are outstanding scheduled action commands. |
|  | Get the list of features supported by the specified camera. |
|  | Get information about the specified camera. |
|  | Return version information about the Pixelink hardware and firmware. |
|  | Retrieve the XML file used by a camera. |
|  | Get a video clip and save it as a Pixelink data stream (.pds) intermediate file. |
|  | Return the current value of the camera's clock, which it uses to timestamp images. |
|  | Save an encoded (compressed) video clip to a file. |
|  | Return details about the last error that occurred. |
|  | Get the current value of the specified feature. |
|  | Get the next image frame from the camera and put it in an image buffer. |
|  | Get the number of cameras currently connected to the bus. |
|  | Get the number of cameras connected to the computer, including IP cameras that do not have a valid IP address. |
|  | Get the number of controllers connected to the computer. |
|  | Initialize a camera and return the camera handle. |
|  | Initialize a camera in its entirety and obtain the camera handle for subsequent API function calls. |
|  | Load settings from non-volatile memory on the camera. |
|  | Remove the descriptor from the specified camera. |
|  | Reset the size of the preview window to the size of the streaming video (thereby optimizing display performance). |
|  | Save the current settings to non-volatile memory on the camera. |
|  | Trigger a specific action in the camera. |
|  | Specify a callback function to modify the video data in the preview window or as it is translated into an end-user format. |
|  | Set the IP address of a specified IP camera. |
|  | Set a name for the specified camera. |
|  | Set a callback that is called whenever the specified event occurs in the specified camera. |
|  | Set the value of the specified feature. |
|  | Set the preview window settings for the specified camera. |
|  | Set the current state of the preview window to playing, stopped, or paused. |
|  | Similar to PxLSetPreviewState, but accommodates a callback function that is called when certain Windows-based operations are performed on the preview window. |
|  | Set the current state of the video stream to started, stopped, or paused. |
|  | Release a particular controller, allowing it to be assigned to another camera. |
|  | Uninitialize the specified camera. |
|  | Set the update mode for the specified descriptor. |
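Several of the functions summarized above must be called in a particular order: a camera is initialized to obtain a handle, the stream is started before frames are grabbed, and the camera is uninitialized when done. The sketch below models that lifecycle as a minimal state machine. The `Camera` class, its method names, and its state strings are illustrative only; they mirror the ordering implied by the table, not actual Pixelink API calls.

```python
# Minimal sketch of the camera lifecycle implied by the table above:
# initialize -> start stream -> grab frames -> stop stream -> uninitialize.
# The Camera class and its names are illustrative, not Pixelink API calls.

class CameraError(RuntimeError):
    pass

class Camera:
    def __init__(self):
        self.initialized = False
        self.stream_state = "stopped"   # "started", "stopped", or "paused"

    def initialize(self):
        # Analogous to obtaining a camera handle before any other call.
        self.initialized = True

    def set_stream_state(self, state):
        if not self.initialized:
            raise CameraError("camera must be initialized first")
        if state not in ("started", "stopped", "paused"):
            raise CameraError("unknown stream state")
        self.stream_state = state

    def get_next_frame(self):
        # Frames can only be grabbed while the stream is running.
        if self.stream_state != "started":
            raise CameraError("stream must be started before grabbing frames")
        return bytes(16)  # placeholder for image data in a caller-supplied buffer

    def uninitialize(self):
        self.set_stream_state("stopped")
        self.initialized = False

cam = Camera()
cam.initialize()
cam.set_stream_state("started")
frame = cam.get_next_frame()
cam.uninitialize()
```

Calling `get_next_frame` before the stream is started raises an error in this sketch, which reflects the general rule that frame-grab calls are only valid while the video stream is in the started state.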
Summary of API Features
The following table alphabetically lists the API Features supported by our cameras.
Designers should consult the referenced topics for detailed descriptions of the API Features.
| API Feature Name | Description |
| --- | --- |
|  | This read-only feature allows the user to query the camera to see what frame rate it will use while streaming image data. |
|  | When enabled, this feature places an upper bound on the amount of aggregate bandwidth the camera may use for image data. |
|  | Brightness controls the black level in the image by applying an offset voltage to the pixels before the analog-to-digital conversion. |
|  | Extended shutter allows for multiple-slope integration to extend the dynamic range of the camera. |
|  | Flip controls the orientation of the image. The image can be flipped either horizontally or vertically. |
|  | Controls the amount of focus applied to the Varioptic liquid lens on the camera. You can perform a "One Time Auto Focus" or set the focus manually within the range of the specific lens. You may edit the upper and lower limits of this range to help speed up focusing. |
|  | The frame interval and the required bandwidth on the communication bus are fixed by the Frame Rate value. The available frame rate range depends on the bus technology, the current video format, shutter speed, ROI, and/or the video mode. |
|  | Gain controls the amplification of the image for the camera. |
|  | When enabled, puts the camera into a High Dynamic Range (HDR) mode of operation, in which images can be produced with enhanced detail discernment in dark areas of the image without compromising brighter areas. The resultant images can be produced automatically by the camera, or the user can request both 'dark' and 'bright' versions of the same image and apply the application's own HDR algorithms to produce a resultant image. |
|  | Gamma controls the contrast in the image and is typically used in microscopy to improve the perceived dynamic range. |
|  | General Purpose Output (GPO) signals are controlled by this feature. A total of 4 GPO signals can be controlled on all PL-B board-level cameras, and 2 GPO signals can be controlled on all USB3 cameras. |
|  | This feature can be used to slow down the camera's imaging rate (frame rate) by a user-specified factor, reducing the camera's allowable frame rates accordingly. It is most useful if your computer cannot process (due to resource limitations within the computer) all of the frames being sent from a camera operating at even its lowest frame rate setting. |
|  | The Lookup Table (LUT) typically has a number of 2-byte entries that range in value from 0 to 1023 (10-bit depth) or 0 to 4095 (12-bit depth). The LUT is used to implement the Gamma feature, but it can also be used to implement any LUT transfer function required. |
|  | This feature allows an application to determine the maximum number of bits the camera can use to represent a pre-formatted (or raw) pixel value. |
|  | This feature allows an application to control the maximum packet size that the software can receive. |
|  | The memory channel feature stores all camera parameters in non-volatile memory. It is similar to the camera configuration files used in frame grabbers and software packages such as LabVIEW, but here the settings reside on the camera, not the host PC. |
|  | The pixel addressing feature reduces the number of pixels that are read from the ROI. Pixel Addressing is controlled by two parameters: a Pixel Addressing mode and a value. |
|  | The Pixel Format refers to the formatted output pixel. In cases where the camera's raw pixel size is larger than the output, the data is truncated and the least significant bits are lost. In cases where the camera's raw pixel size is smaller than the output, the least significant bits in the output data are padded with zeros. |
|  | Defines how the API should interpret the pixel format PIXEL_FORMAT_HSV4_12. This interpretation affects the API's preview capability, as well as the image formatting/conversions performed (on PIXEL_FORMAT_HSV4_12 frames) via PxLFormatImage. |
|  | A mechanism to attenuate any of the 4 polar channels, thereby reducing the influence that light polarized to that specific channel has on the pixel value(s). This feature is of particular value when used with PIXEL_FORMAT_POLAR4_12, insofar as it can be used, for instance, to view individual polar channels. For example, if f45Value were set to 100 while f0Value, f90Value, and f135Value were all set to 0, then the pixel values would only be influenced by light polarized to 45 degrees (clockwise from horizontal) and, of course, unpolarized light. Note that when this feature is combined with PIXEL_FORMAT_POLAR4_12, the pixel values are normalized (see note below). |
|  | This feature allows 2 or more cameras on the same network to synchronize their clocks to a 'Master' clock using the IEEE 1588 / PTPv2 protocol. Pixelink cameras are capable of becoming the master clock on the network, as negotiated via the PTP protocol. An application gets timestamps from a camera's clock via the function PxLGetCurrentTimestamp. What's more, a camera timestamp is also returned in the descriptor of each frame received, and with certain other operations, such as event callbacks. |
|  | Region of Interest (ROI) is a feature of most CMOS sensors that allows only a portion of the active sensor to be selected and read out. The benefit of this is a reduction in the total number of pixels and an increase in the readout speed. Often referred to as windowing, the ROI is defined by a top and left pixel as well as a width and height. |
|  | Rotate controls the rotation of the image. The image can be rotated by 90, 180, or 270 degrees in the clockwise direction. |
|  | Saturation controls the intensity of the hues in the image. The saturation control allows the hue to be varied from full mono to more than twice normal. A saturation setting of 100 has no effect; with saturation set to zero, the colour camera behaves as a monochrome camera. |
|  | The sharpness feature is a standard convolution filter applied to the intensity (luma) channel, in which the pixel luma at location (i, j) is combined with a scaled Laplacian of the luma at that location. The Laplacian is implemented as a 3x3 convolution kernel. |
|  | A set of control information the camera uses to calculate the SharpnessScore of an image. The SharpnessScore of the image is returned in the SharpnessScore field of the FRAME_DESC structure that is returned with each image capture. |
|  | The shutter feature controls the exposure time of the sensor. Increasing the shutter integration time makes the image brighter. On CMOS sensors, increasing the shutter integration time will also increase the amount of noise in the image. |
|  | This feature can be used to put the camera into (and out of) a 'special' mode of operation. |
|  | The Temperature feature is a read-only feature that provides an indication of the temperature of the sensor chip. This is important because sensor performance is related to sensor temperature; typically, read noise can double for every 10-degree increase in temperature. In warm environments, the temperature sensor can be used to assess the effectiveness of the mounting hardware at removing heat from the camera. |
|  | Trigger controls the response of the camera to an external trigger input. Trigger functionality is required for industrial and machine vision applications where the timing of the image capture is determined by external events. The trigger can operate in a number of modes which provide flexibility when interfacing the camera with external equipment. |
|  | Trigger with Controlled Lighting allows control over the manner in which rolling shutter or fast-reset shutter cameras function when using a trigger. In normal operation, a rolling shutter camera is resetting, exposing, and reading out information concurrently. This results in consistent exposure times and faster frame rates. However, since the rolling shutter is only exposing a portion of the sensor at a time, this mode is not effective at stop-motion imaging. |
|  | White balance defines the colour temperature of the light source. Calibrations are performed for 3200 K (incandescent), 5000 K (daylight 1), and 6500 K (daylight 2). The camera uses this information to select from one of a number of possible colour correction matrices. Turning the White Balance off disables the colour correction. |
|  | The White shading feature provides control over the individual red, green, and blue channel gains so that a non-standard colour balance can be achieved. One-push Auto will attempt to white-balance the gains (match the histogram peaks of the brightest area in each colour channel) based on the image data in the current ROI. |
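The Gamma and Lookup Table rows above are related: gamma correction is commonly realized by filling the LUT with a power-law transfer curve and then indexing each pixel through it. The sketch below builds a 10-bit (1024-entry) LUT of that kind; the standard power-law formula is shown as an illustration under assumed conventions, not as the camera firmware's exact curve.

```python
# Build a 10-bit lookup table (entries 0..1023, matching the LUT feature's
# 10-bit depth) that applies a standard power-law gamma curve.
# The exact curve used by camera firmware may differ; this is illustrative.

def build_gamma_lut(gamma, depth_bits=10):
    max_val = (1 << depth_bits) - 1          # 1023 for 10-bit, 4095 for 12-bit
    return [round(max_val * (i / max_val) ** (1.0 / gamma))
            for i in range(max_val + 1)]

lut = build_gamma_lut(2.2)                   # a common display gamma

# Applying the LUT is a simple indexed read per pixel:
pixels = [0, 256, 512, 1023]
corrected = [lut[p] for p in pixels]
```

Because the same table can hold any monotonic (or even non-monotonic) mapping, the LUT feature can implement arbitrary transfer functions, not just gamma, exactly as the description above notes.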
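The Sharpness row describes a Laplacian-based convolution on the luma channel, but this summary does not give the kernel or the sign convention. The sketch below assumes the common 4-neighbour 3x3 Laplacian kernel and the conventional sharpening form of adding the scaled Laplacian back to the luma; both choices are assumptions for illustration.

```python
# Illustrative luma sharpening: Y'(i,j) = Y(i,j) + s * lap(i,j), where lap is
# a 4-neighbour 3x3 Laplacian. The camera's exact kernel and sign convention
# are not given in this summary, so these are assumptions.

LAPLACIAN = [[0, -1, 0],
             [-1, 4, -1],
             [0, -1, 0]]

def sharpen_luma(luma, strength=1.0):
    h, w = len(luma), len(luma[0])
    out = [row[:] for row in luma]
    for i in range(1, h - 1):                # interior pixels only, for brevity
        for j in range(1, w - 1):
            lap = sum(LAPLACIAN[di + 1][dj + 1] * luma[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = luma[i][j] + strength * lap
    return out

flat = [[50] * 4 for _ in range(4)]          # uniform region: Laplacian is zero
edge = [[0, 0, 100, 100] for _ in range(4)]  # vertical step edge
sharpened = sharpen_luma(edge)
```

On the flat region the Laplacian is zero everywhere, so the output is unchanged; across the step edge the filter overshoots on the bright side and undershoots on the dark side, which is what increases perceived sharpness.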
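The Pixel Format row's truncation and padding behaviour amounts to simple bit shifts between the raw depth and the output depth. The sketch below illustrates both directions; the helper function name is ours, but the shift-based scheme follows directly from the description above.

```python
# Illustrative bit-depth conversion per the Pixel Format description:
# - raw larger than output: truncate (the least significant bits are lost)
# - raw smaller than output: pad the output's least significant bits with zeros

def convert_pixel(value, raw_bits, out_bits):
    if raw_bits > out_bits:
        return value >> (raw_bits - out_bits)   # truncation: LSBs dropped
    return value << (out_bits - raw_bits)       # padding: LSBs become zero

# A 12-bit raw value 0xABC to an 8-bit output keeps the top 8 bits:
eight = convert_pixel(0xABC, 12, 8)     # -> 0xAB
# The same 12-bit value widened to 16 bits is zero-padded in the low bits:
sixteen = convert_pixel(0xABC, 12, 16)  # -> 0xABC0
```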