
I am using ctypes to access the image acquisition API from National Instruments (NI-IMAQ). Among its functions there is one called imgBayerColorDecode(), which I use on the Bayer-encoded image returned by the imgSnap() function. I would like to compare the decoded output (i.e. an RGB image) with some numpy ndarrays that I will create from the raw data, which is what imgSnap returns.

However, there are 2 problems.

The first one is simple: getting the imgbuffer returned by imgSnap into a numpy array. There is a catch, though: if your machine is 64-bit and you have more than 3 GB of RAM, you cannot create the array with numpy and pass it as a pointer to imgSnap. That is why you have to implement a workaround, which is described on NI's forums (NI ref - first 2 posts): disable the error message (line 125 in the code attached below: imaq.niimaquDisable32bitPhysMemLimitEnforcement) and make sure that it is the IMAQ library that creates the memory the image requires (imaq.imgCreateBuffer). After that, the recipe from that SO answer should be able to convert the buffer back into a numpy array. But I am not sure whether I made the right changes to the data types: the camera has 1020x1368 pixels, and each pixel's intensity is recorded with 10-bit precision. It returns the image over a CameraLink connection, and I am assuming it does so with 2 bytes per pixel to ease data transfer. Does that mean I have to adapt the recipe given in the other SO question from this:

buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), 8*array_length)
a = numpy.frombuffer(buffer, float)

into this:

bufsize = 1020*1368*2
buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), bufsize)
a = numpy.frombuffer(buffer, numpy.int16)
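As an aside to the recipe above: numpy.core.multiarray.int_asbuffer has been removed from recent numpy releases, and since the 10-bit intensities are unsigned, numpy.uint16 is the safer dtype than numpy.int16. A minimal sketch of the same zero-copy wrapping using np.ctypeslib.as_array instead, with a plain ctypes array standing in (hypothetically) for the driver-allocated imgbuffer:

```python
import ctypes
import numpy as np

# hypothetical stand-in for the IMAQ-allocated buffer: 1020 x 1368
# pixels, 2 bytes each, allocated with plain ctypes
height, width = 1020, 1368
raw = (ctypes.c_uint16 * (height * width))()
raw[0] = 1023  # tag one pixel (full 10-bit scale) to show the wrapper shares memory
ptr = ctypes.cast(raw, ctypes.POINTER(ctypes.c_uint16))

# np.ctypeslib.as_array wraps the ctypes memory without copying,
# avoiding the removed int_asbuffer route; the dtype follows the
# pointer's ctypes type (c_uint16 -> uint16)
a = np.ctypeslib.as_array(ptr, shape=(height, width))
```

The resulting array shares memory with the ctypes buffer, so it stays valid only as long as the underlying IMAQ buffer does.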

The second problem is that imgBayerColorDecode() does not give me the output I expect. Below are two images: the first is the output of imgSnap, saved with imgSessionSaveBufferEx(). The second is the output of imgSnap after demosaicing with imgBayerColorDecode().

  • Raw data: i42.tinypic.com/znpr38.jpg
  • Bayer decoded: i39.tinypic.com/n12nmq.jpg

As you can see, the Bayer-decoded image is still a grayscale image, and moreover it is not identical to the raw image (a small note here: the images were rescaled with imagemagick for upload). The raw image was taken through a red color filter in front of some mask. From it (and the two other color filters) I know that the Bayer color filter looks like this in the top-left corner:

BGBG
GRGR

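To sanity-check which corner of the mosaic holds which colour before involving imgBayerColorDecode at all, the Bayer sites of such a BGBG / GRGR layout can be pulled apart with plain numpy slicing. This is a toy illustration only, not the interpolation NI's decoder performs:

```python
import numpy as np

# toy 4x4 Bayer mosaic laid out as described above:
# B G B G
# G R G R
# B G B G
# G R G R
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)

blue = mosaic[0::2, 0::2]   # B sites: even rows, even columns
red = mosaic[1::2, 1::2]    # R sites: odd rows, odd columns
# two G sites per 2x2 cell: even-row/odd-col and odd-row/even-col
green = np.stack([mosaic[0::2, 1::2], mosaic[1::2, 0::2]])
```

If a channel pulled out this way does not match what the red-filter exposure predicts, the pattern constant passed to the decoder (IMG_BAYER_PATTERN_BGBG_GRGR here) is the first suspect.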
I believe I am doing something wrong in passing the right type of pointer to imgBayerColorDecode; my code is attached below.

#!/usr/bin/env python
from __future__ import division

import ctypes as C
import ctypes.util as Cutil
import time


# useful references:
# location of the niimaq.h: C:\Program Files (x86)\National Instruments\NI-IMAQ\Include
# location of the camera files: C:\Users\Public\Documents\National Instruments\NI-IMAQ\Data
# check it C:\Users\Public\Documents\National Instruments\NI-IMAQ\Examples\MSVC\Color\BayerDecode

class IMAQError(Exception):
    """A class for errors produced during the calling of National Intrument's IMAQ functions.
    It will also produce the textual error message that corresponds to a specific code."""

    def __init__(self, code):
        self.code = code
        text = C.create_string_buffer(256)  # writable buffer; imgShowError copies the message text into it
        imaq.imgShowError(code, text)
        self.message = "{}: {}".format(self.code, text.value)
        # Call the base class constructor with the parameters it needs
        Exception.__init__(self, self.message)


def imaq_error_handler(code):
    """Print the textual error message that is associated with the error code."""

    if code < 0:
        # clean up before raising: statements placed after `raise` would never run
        free_associated_resources = 1
        imaq.imgSessionStopAcquisition(sid)
        imaq.imgClose(sid, free_associated_resources)
        imaq.imgClose(iid, free_associated_resources)
        raise IMAQError(code)
    else:
        return code

if __name__ == '__main__':
    imaqlib_path = Cutil.find_library('imaq')
    imaq = C.windll.LoadLibrary(imaqlib_path)


    imaq_function_list = [  # this is not an exhaustive list, merely the ones used in this program
        imaq.imgGetAttribute,
        imaq.imgInterfaceOpen,
        imaq.imgSessionOpen,
        imaq.niimaquDisable32bitPhysMemLimitEnforcement,  # because we're running on a 64-bit machine with over 3GB of RAM
        imaq.imgCreateBufList,
        imaq.imgCreateBuffer,
        imaq.imgSetBufferElement,
        imaq.imgSnap,
        imaq.imgSessionSaveBufferEx,
        imaq.imgSessionStopAcquisition,
        imaq.imgClose,
        imaq.imgCalculateBayerColorLUT,
        imaq.imgBayerColorDecode ]

    # for all imaq functions we're going to call, we should specify that if they
    # produce an error (a number), we want to see the error message (textually)
    for func in imaq_function_list:
        func.restype = imaq_error_handler




    INTERFACE_ID = C.c_uint32
    SESSION_ID = C.c_uint32
    BUFLIST_ID = C.c_uint32
    iid = INTERFACE_ID(0)
    sid = SESSION_ID(0)
    bid = BUFLIST_ID(0)
    array_16bit = 2**16 * C.c_uint32
    redLUT, greenLUT, blueLUT  = [ array_16bit() for _ in range(3) ]
    red_gain, blue_gain, green_gain = [ C.c_double(val) for val in (1., 1., 1.) ]

    # OPEN A COMMUNICATION CHANNEL WITH THE CAMERA
    # our camera has been given its proper name in Measurement & Automation Explorer (MAX)
    lcp_cam = 'JAI CV-M7+CL'
    imaq.imgInterfaceOpen(lcp_cam, C.byref(iid))
    imaq.imgSessionOpen(iid, C.byref(sid))

    # START C MACROS DEFINITIONS
    # define some C preprocessor macros (these are all defined in the niimaq.h file)
    _IMG_BASE = 0x3FF60000

    IMG_BUFF_ADDRESS = _IMG_BASE + 0x007E  # void *
    IMG_BUFF_COMMAND = _IMG_BASE + 0x007F  # uInt32
    IMG_BUFF_SIZE = _IMG_BASE + 0x0082  #uInt32
    IMG_CMD_STOP = 0x08  # single shot acquisition

    IMG_ATTR_ROI_WIDTH = _IMG_BASE + 0x01A6
    IMG_ATTR_ROI_HEIGHT = _IMG_BASE + 0x01A7
    IMG_ATTR_BYTESPERPIXEL = _IMG_BASE + 0x0067  
    IMG_ATTR_COLOR = _IMG_BASE + 0x0003  # true = supports color
    IMG_ATTR_PIXDEPTH = _IMG_BASE + 0x0002  # pix depth in bits
    IMG_ATTR_BITSPERPIXEL = _IMG_BASE + 0x0066 # aka the bit depth

    IMG_BAYER_PATTERN_GBGB_RGRG = 0
    IMG_BAYER_PATTERN_GRGR_BGBG = 1
    IMG_BAYER_PATTERN_BGBG_GRGR = 2
    IMG_BAYER_PATTERN_RGRG_GBGB = 3
    # END C MACROS DEFINITIONS

    width, height = C.c_uint32(), C.c_uint32()
    has_color, pixdepth, bitsperpixel, bytes_per_pixel = [ C.c_uint8() for _ in range(4) ]

    # poll the camera (or is it the camera file (icd)?) for these attributes and store them in the variables
    for var, macro in [ (width, IMG_ATTR_ROI_WIDTH), 
                        (height, IMG_ATTR_ROI_HEIGHT),
                        (bytes_per_pixel, IMG_ATTR_BYTESPERPIXEL),
                        (pixdepth, IMG_ATTR_PIXDEPTH),
                        (has_color, IMG_ATTR_COLOR),
                        (bitsperpixel, IMG_ATTR_BITSPERPIXEL) ]:
        imaq.imgGetAttribute(sid, macro, C.byref(var))  


    print("Image ROI size: {} x {}".format(width.value, height.value))
    print("Pixel depth: {}\nBits per pixel: {} -> {} bytes per pixel".format(
        pixdepth.value, 
        bitsperpixel.value, 
        bytes_per_pixel.value))

    bufsize = width.value*height.value*bytes_per_pixel.value
    imaq.niimaquDisable32bitPhysMemLimitEnforcement(sid)

    # create the buffer (in a list)
    imaq.imgCreateBufList(1, C.byref(bid))  # Creates a buffer list with one buffer

    # CONFIGURE THE PROPERTIES OF THE BUFFER
    imgbuffer = C.POINTER(C.c_uint16)()  # create a null pointer
    RGBbuffer = C.POINTER(C.c_uint32)()  # placeholder for the Bayer decoded imgbuffer (i.e. demosaiced imgbuffer)
    imaq.imgCreateBuffer(sid, 0, bufsize, C.byref(imgbuffer))  # allocate memory (the buffer) on the host machine (param2==0)
    imaq.imgCreateBuffer(sid, 0, width.value*height.value * 4, C.byref(RGBbuffer))

    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_ADDRESS, C.cast(imgbuffer, C.POINTER(C.c_uint32)))  # my guess is that the cast to an uint32 is necessary to prevent 64-bit callable memory addresses
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_SIZE, bufsize)
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_COMMAND, IMG_CMD_STOP)

    # CALCULATE THE LOOKUP TABLES TO CONVERT THE BAYER ENCODED IMAGE TO RGB (=DEMOSAICING)
    imaq.imgCalculateBayerColorLUT(red_gain, green_gain, blue_gain, redLUT, greenLUT, blueLUT, bitsperpixel)


    # CAPTURE THE RAW DATA 

    imgbuffer_vpp = C.cast(C.byref(imgbuffer), C.POINTER(C.c_void_p))
    imaq.imgSnap(sid, imgbuffer_vpp)
    #imaq.imgSnap(sid, imgbuffer)  # <- doesn't work (img produced is entirely black). The above 2 lines are required
    imaq.imgSessionSaveBufferEx(sid, imgbuffer,"bayer_mosaic.png")
    print('1 taken')


    imaq.imgBayerColorDecode(RGBbuffer, imgbuffer, height, width, width, width, redLUT, greenLUT, blueLUT, IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0) 
    imaq.imgSessionSaveBufferEx(sid,RGBbuffer,"snapshot_decoded.png");

    free_associated_resources = 1
    imaq.imgSessionStopAcquisition(sid)
    imaq.imgClose(sid, free_associated_resources )
    imaq.imgClose(iid, free_associated_resources )
    print "Finished"

Follow-up: after a discussion with an NI representative, I am convinced that the second problem is due to imgBayerColorDecode being limited to 8-bit input images before its 2012 release (we are working with the 2010 version). However, I would like to confirm this: if I convert my 10-bit image to an 8-bit one, keeping only the most significant byte, and pass this converted version to imgBayerColorDecode, I would expect to see an RGB image.

To do so, I cast the imgbuffer to a numpy array and shift the 10-bit data by 2 bits:

np_buffer = np.core.multiarray.int_asbuffer(
    ctypes.addressof(imgbuffer.contents), bufsize)
flat_data = np.frombuffer(np_buffer, np.uint16)

# from 10 bit to 8 bit, keeping only the non-empty bytes
Z = (flat_data>>2).view(dtype='uint8')[::2] 
Z2 = Z.copy()  # just in case
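The shift-and-view trick above can be checked on a few hand-picked 10-bit values. Note that the [::2] stride keeps the low byte of each uint16 word, which is where the 8 significant bits land after the shift on a little-endian (x86) machine; on a big-endian host it would have to be [1::2]:

```python
import numpy as np

# a few 10-bit samples stored in uint16 words: 0, 2**8, 2**9, and full scale
flat_data = np.array([0, 256, 512, 1023], dtype=np.uint16)

# shift right by 2 so the 8 significant bits occupy the low byte,
# then reinterpret the words as bytes and keep the low byte of each pair
Z = (flat_data >> 2).view(dtype=np.uint8)[::2]  # -> 0, 64, 128, 255
```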

Now I pass the ndarray Z2 to imgBayerColorDecode:

bitsperpixel = 8
imaq.imgBayerColorDecode(RGBbuffer, Z2.ctypes.data_as(
    ctypes.POINTER(ctypes.c_uint8)), height, width, 
    width, width, redLUT, greenLUT, blueLUT, 
    IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0)

Note that the original code (shown above) has been altered slightly, such that redLUT, greenLUT and blueLUT are now only 256-element arrays. And finally I call imaq.imgSessionSaveBufferEx(sid, RGBbuffer, save_path). But the result is still grayscale and the image shape is not preserved, so I am still doing something very wrong. Any ideas?


1 Answer


After some fiddling around, it turns out that the RGBbuffer mentioned does hold the correct data, but imgSessionSaveBufferEx is doing something odd at that point.

When I pass the data from RGBbuffer back to numpy, reshape this 1D array into the image's dimensions and then split it into color channels by masking and bitshift operations (e.g. red_channel = (np_RGB & 0XFF000000)>>16), I can save it as a nice color png with PIL or pypng.
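A sketch of that unpacking step on fabricated data. The masks below assume each uint32 packs the pixel as 0x00RRGGBB; the actual byte layout of IMAQ's 32-bit RGB buffer is an assumption here, so the masks and shifts may need adjusting to match whichever layout the driver actually writes:

```python
import numpy as np

height, width = 2, 3  # toy dimensions instead of 1020 x 1368

# fabricated packed pixels, assuming a 0x00RRGGBB layout per uint32
np_RGB = np.full((height, width), 0x00AA1122, dtype=np.uint32)

red = ((np_RGB & 0x00FF0000) >> 16).astype(np.uint8)
green = ((np_RGB & 0x0000FF00) >> 8).astype(np.uint8)
blue = (np_RGB & 0x000000FF).astype(np.uint8)

# interleave into the (height, width, 3) uint8 layout PIL expects
rgb = np.dstack([red, green, blue])
```

From here, PIL's Image.fromarray(rgb, 'RGB') can write the png directly.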

I have not yet found out why imgSessionSaveBufferEx behaves strangely, but the solution above works (even though it is really inefficient speed-wise).

Answered 2013-06-25T07:50:07.917