
I am trying to develop a pipeline in Python using OpenCV to stabilize images from a fluidics experiment. Here is an example raw image (actual size: 1920x1460):

The pipeline should stabilize both the low-frequency drift and the occasional high-frequency "jolts" that happen when valves are opened/closed during the experiment. Following the example here, my current approach is to apply a bilateral filter followed by adaptive thresholding to bring out the channels in the image, and then use goodFeaturesToTrack to find corners in the thresholded image. However, there is a lot of noise in the image because of a low-contrast optical effect in the image corners. I can find the corners of the channels, as shown here, but they move around a lot from frame to frame, see here.

I tracked the x and y pixel offsets of each frame relative to the first frame using calcOpticalFlowPyrLK and computed a rigid transform with estimateRigidTransform, shown in this plot. In that plot I can see the low-frequency drift over frames 0:200 as well as sharp jumps around frame ~225. Those jumps match what I observe in the video, but the large amount of noise (roughly 5-10 pixels in amplitude) does not. If I apply these transforms to my image stack, the jitter gets worse and the images are not stabilized (a sketch of how I apply them is after the code below). Furthermore, if I try to compute the transform from each frame to the next (rather than from every frame to the first), then after processing a handful of frames estimateRigidTransform returns None for the transform matrix, presumably because the noise prevents a rigid transform from being fitted.
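
Roughly, that frame-to-frame variant only differs in that I re-threshold and re-detect features on each new reference frame. A minimal sketch of it (preprocess() is just shorthand for the same bilateral filter / adaptive threshold parameters as in the full listing below, which also sets up input_image_8 and n_frames):

#Frame-to-frame variant: input_image_8 and n_frames are set up exactly as in the
#full listing below; preprocess() is the same filter + threshold step
def preprocess(img):
    f = cv2.bilateralFilter(img,9,75,75)
    return cv2.adaptiveThreshold(f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)

prev_t = preprocess(input_image_8[0])
prev_pts = cv2.goodFeaturesToTrack(prev_t,maxCorners=100,qualityLevel=0.5,minDistance=10,blockSize=25,mask=None)

for i in range(1,n_frames):
    curr_t = preprocess(input_image_8[i])
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_t,curr_t,prev_pts,None)
    idx = np.where(status==1)[0]
    m = cv2.estimateRigidTransform(prev_pts[idx],curr_pts[idx], fullAffine=False)
    if m is None:
        #This is where the frame-to-frame version fails after a handful of frames
        print("Frame: " + str(i) + " - estimateRigidTransform returned None")
        break
    #Advance the reference: re-threshold and re-detect features on the new frame
    prev_t = curr_t
    prev_pts = cv2.goodFeaturesToTrack(prev_t,maxCorners=100,qualityLevel=0.5,minDistance=10,blockSize=25,mask=None)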

Here is an example of how I compute the transforms (every frame relative to the first):

# Load required libraries
import numpy as np
from skimage.external import tifffile as tif
import os
import cv2
import matplotlib.pyplot as plt
from sklearn.externals._pilutil import bytescale

#Read in file and convert to 8-bit so it can be processed
os.chdir(r"C:\Path\to\my\processingfolder\inputstack")
inputfilename = "mytestfile.tif"
input_image = tif.imread(inputfilename)
input_image_8 = bytescale(input_image)
n_frames, vid_height, vid_width = np.shape(input_image_8)


transforms = np.zeros((n_frames-1,3),np.float32)
prev_image = input_image_8[0]  #first frame is the reference
prev_f = cv2.bilateralFilter(prev_image,9,75,75)
prev_t = cv2.adaptiveThreshold(prev_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)
prev_pts = cv2.goodFeaturesToTrack(prev_t,maxCorners=100,qualityLevel=0.5,minDistance=10,blockSize=25,mask=None)

for i in range(1,n_frames-2):

    curr_image = input_image_8[i]
    curr_f = cv2.bilateralFilter(curr_image,9,75,75)
    curr_t = cv2.adaptiveThreshold(curr_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)

    #Detect features through optical flow:
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_t,curr_t,prev_pts,None)

    #Sanity check 
    assert len(prev_pts) == len(curr_pts)
    #Filter to only the valid points
    idx = np.where(status==1)[0]
    prev_pts = prev_pts[idx]
    curr_pts = curr_pts[idx]

    #Find transformation matrix
    m = cv2.estimateRigidTransform(prev_pts,curr_pts, fullAffine=False) #will only work with OpenCV-3 or less

    # Extract translation
    dx = m[0,2]
    dy = m[1,2]

    # Extract rotation angle
    da = np.arctan2(m[1,0], m[0,0])

    # Store transformation
    transforms[i] = [dx,dy,da]

    print("Frame: " + str(i) +  "/" + str(n_frames) + " -  Tracked points : " + str(len(prev_pts)))

How can I process my images differently so that I pick out the lines of these channels without the noise in the corner detection? The stabilization/alignment does not need to happen on the fly; it can be applied to the whole stack after the fact.
