When using Pixelink cameras with OpenCV, we recommend using the Pixelink SDK to capture a frame first, then passing the captured frame to OpenCV for processing, as shown in the examples below. Because OpenCV does not support standard machine vision interfaces such as USB3 Vision, using the SDK instead allows you to access the camera and adjust/control its features.
Please note: The OpenCV Video I/O module and its VideoCapture class support DirectShow, so our cameras can be used with OpenCV through this interface. However, access to camera features is extremely limited when using DirectShow.
Using C++, the following example captures an image from a mono camera and converts the frame to a format that OpenCV can manipulate. The openCVSnapshot code sample, included with version 10.6 of the SDK, provides a more in-depth example that works with both mono and colour cameras.
PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, (U32)frameBuffer.size(), &frameBuffer[0], &frameDesc);
if (API_SUCCESS(rc))
{
    rc = PxLFormatImage(&frameBuffer[0], &frameDesc, IMAGE_FORMAT_RAW_MONO8, &imageBuffer[0], &imageBufferSize);
    if (API_SUCCESS(rc))
    {
        // 'Convert' the image to one that OpenCV can manipulate.
        Mat openCVImage((int)(frameDesc.Roi.fHeight / frameDesc.PixelAddressingValue.fVertical),
                        (int)(frameDesc.Roi.fWidth / frameDesc.PixelAddressingValue.fHorizontal),
                        CV_8UC1,   // 8-bit mono pixels, matching IMAGE_FORMAT_RAW_MONO8
                        &imageBuffer[0]);
        if (openCVImage.data)
        {
            // Do OpenCV manipulations on the matrix here.
        }
    }
}
Using .NET and OpenCvSharp, the following code illustrates how to get a frame from a mono camera and convert it to an OpenCV Mat for further processing.
ReturnCode rc = Api.GetNextFrame(cam.m_hCamera, buffer.Length, buffer, ref frameDesc);
// Wrap the captured buffer in an OpenCvSharp Mat (8-bit mono).
OpenCvSharp.Mat frame = new OpenCvSharp.Mat((int)frameHeight, (int)frameWidth, OpenCvSharp.MatType.CV_8UC1, buffer);